Posts tagged with "ai"
- The past cannot be changed. But I wonder if we are losing a moment to change the future.
1/20/2026
This Martin Luther King Jr. Day, I found myself reflecting on the immense gap of privilege created during the era of American slavery. Slavery existed globally, but its impact on Black Americans was uniquely devastating. While there are more reasons for this than I’m even aware of, two in particular strike me presently. First, the Industrial Revolution generated tremendous prosperity that Black Americans were excluded from. Second, they were unable to participate in the opportunity of cheap, claimable land.
My own family’s history benefited from this land. My great-grandfather arrived in the Pacific Northwest and purchased farmland. He established a family farming corporation that accumulated numerous plots of land which our family still owns today. As the last of the family farmers prepares to retire, that land will be sold, resulting in a substantial profit.
These historical factors placed one group of Americans at an incredible advantage while another was fighting for the basic rights promised, but not yet bestowed, by the U.S. Constitution. By the time the Civil Rights movement secured these rights, the economic damage was already deep-seated.
Artificial Intelligence represents a unique moment in our history. It holds the potential to bridge this gap. However, I fear the opposite will occur. I fear that AI will not decrease the gap, but rather widen it until it becomes insurmountable.
With hindsight, we look back at the past with shame, wishing we could change what transpired. We cannot change the past. But I wonder if we are currently overlooking a crucial moment to change the future, or if we will repeat the sins of our fathers?
- Potential and Actual Intelligence: How Human Thinking Differs from AI Thinking
1/19/2026
After listening to the EconTalk podcast episode Nature, Nurture, and Identical Twins (with David Bessis), I read the essay David Bessis wrote, Twins reared apart do not exist, which was the subject of the episode. Both were informative and helped me understand the conversation between hereditarians and blank‑slatists. But something else caught my attention in the essay.
Bessis begins with three illustrations of potential values for the heritability of IQ, 30%, 50%, and 80%. He maps genetic potential against actual IQ for each percentage. In this post I’m not addressing questions about IQ, its measurement, or use. What sparked my interest is the distinction between potential and actual, and whether that distinction adds to the conversation on what differentiates human thinking from AI thinking.
It seems to me that genetics does set limits on the range of capacity each human is able to achieve in thinking, but it is not a predetermined number. Other factors have significant influence on where each human ends up in this range. Therefore, there is some capacity the individual human can achieve, but there is also some measured value designating where they currently are.
This leads me to believe that there are many factors that influence how the brain thinks. It’s not about crunching data about a question and ending up at a result. Instead, a lot of seemingly unrelated data accumulated over a lifetime of experiences mingles in the brain, shaping the pathways on the way to producing a thought.
How does AI compare when mapping genetic potential and actual IQ onto machines? Is it accurate to say that training is a model’s genetic potential and reinforcement learning is its actual intelligence? I don’t think it is. In humans, actual intelligence continues to be shaped by interactions with the world; our experiences keep influencing us. For AI, once the model is released it is fixed. Additional information can be provided that influences its generated responses, but its intelligence is locked in; its genetic potential is still its actual intelligence.
In the end an AI’s intelligence is hereditary rather than blank‑slate, and that leads to a very different form of thinking. AI thinking is not human thinking.
- Our Role Using LLMs
1/5/2026
Over the last few years, our role in working with generative AI has been shifting. Each year, the work moves a little further away from “writing the perfect prompt” and a little closer to shaping how AI operates in real environments.
2024: Prompt Engineering
Our role was crafting prompts to draw the knowledge and behavior we desired out of the AI.
But prompting isn’t enough when the AI doesn’t have the right information.
2025: Context Engineering
Our role was providing the AI with context so that it had the relevant information for the task. This also provided guardrails, focusing it on the desired task instead of letting it stray into other areas. It was also given new abilities through tools, allowing it to gather its own context.
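As a loose illustration of what “tools” means here (my own sketch, not from any specific framework; the function name and schema are hypothetical), a tool is just a function the model can ask to have called, described by a name, a purpose, and parameters, so it can go fetch its own context:

```python
# Hypothetical context-gathering tool. The search_docs function and its
# schema are illustrative only, not tied to a particular AI framework.

def search_docs(query: str, max_results: int = 3) -> list[str]:
    """Return snippets from a (stubbed) internal knowledge base."""
    knowledge_base = {
        "deploy": "Deployments run from the main branch via the release pipeline.",
        "oncall": "The on-call rotation is documented in the team handbook.",
    }
    return [text for key, text in knowledge_base.items() if key in query.lower()][:max_results]

# A description like this is what the model sees, so it can decide when to
# call the tool and gather relevant context on its own.
search_docs_tool = {
    "name": "search_docs",
    "description": "Search internal documentation for task-relevant context.",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "What to look up."},
            "max_results": {"type": "integer", "default": 3},
        },
        "required": ["query"],
    },
}
```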
2026: Teaching
Once AI can retrieve context for itself, the next challenge becomes how it interprets and applies the information.
Our role will be guiding the AI on the context it retrieves. We will need to correct it when it applies information mistakenly, whether because it is unaware of the complete task or because it overlooks pertinent information it is not referencing. It needs to be taught how to wield the vast knowledge it has access to.
AI and knowledge work
This describes a broader transition in knowledge work.
AI excels at knowledge work. However, knowledge work is not limited to performing a task. Currently, information about the work, what is actually needed, is spread across many systems, channels, and people. It still takes humans who are knowledgeable about the higher goal, and who know where to seek information, to create a complete picture of what is needed or to pull out what is desired.
In my previous position I worked at a financial institution. The software development department did not build the primary systems. Instead it built systems that integrated them, giving both employees and customers a view of the data and the ability to perform operations. This provided a distinct advantage to the institution, because information was not siloed.
Our relationship with AI seems to be following a similar model. AI excels at performing tasks, but it still requires human oversight to bring everything together, connect the systems, and draw out the information.
The progression from prompt engineering to context engineering to teaching is really a shift in where the human value sits: less in producing the output, and more in guiding how the output is produced.
- AI Promoter, AI Detractor
12/30/2025
I find myself listening to many strong advocates of AI. I can feel the pull of the hype, and I can see how that exposure has subtly biased my expectations.
That bias became more apparent while listening to the podcast Vibe Coding Manifesto: Why Claude Code Isn’t It & What Comes After the IDE. As Steve Yegge described both his current practices and his vision of the future, I found myself increasingly skeptical.
Soon after, I encountered two pieces that pushed back against this vision. The first was The Future of Software Development is Software Developers, which reasserts the central role of software developers:
But, when it matters, there will be a software developer at the wheel. And, if Jevons is to be believed, probably even more of us.
The second, more forceful critique came from Rich Hickey in Thanks AI!:
When did we stop considering things failures that create more problems than they solve?
I have been a software developer for 25 years. My scope has been limited, and my projects have generally been on the smaller side. Over that time, I have learned where I have over-invested, making systems more complex than necessary, and where I have under-invested, missing opportunities as a result. I have also learned that I cannot keep up with every new innovation, nor should I try.
So what do I make of AI in software development?
I do see it as a powerful tool. People will use it in many different ways, and that experimentation matters. But these uses are still experiments. Their successes and failures will shape what comes next. Progress has almost always worked this way.
What feels different now is the perceived cost of hesitation. With AI, the fear of being left behind feels stronger than usual, especially given that caution is typically the default.
I count myself among those who feel that pull. My hope is to proceed with awareness, to experiment deliberately, and to form my own perspective through experience rather than hype.
- AI Will Make Our Children Stupid
12/20/2025
Commenting on AI will make our children stupid: We are creating a terrible learning environment for the young
The line of thinking expressed in the article:
- IQ is declining
- Attention spans weakening
- AI allows children to outsource their thinking entirely
- They possess the answer but lack the understanding of how it was derived
- Those in authority believe exams need to be abolished in order to embrace AI
- The process of writing is itself constitutive of understanding. Writing is thinking.
- Learning requires friction
The article paints a bleak future. My thoughts, pushing back on their position:
- For a few generations now, prior generations have been concerned about the softening of the younger one. To me each generation is different, but still capable.
- Each person will leverage AI in a jagged manner. Each will outsource some portion of their thinking. Some will do so to a concerning degree, and to their detriment. My concern is that the advantaged will do so more responsibly, because of oversight and training.
- Education does need to change. Many have thought so for a long time. But for a variety of reasons education is either slow or resistant to change. AI may force the change to occur.
- Friction is core to learning, but learning is even better if it is hard fun.
- Humans have adapted amazingly well to the changes and different environments we have found ourselves in. Granted, we haven’t done so perfectly, and at times the cost has been high, but we continue forward. That won’t always be the case, but I’m optimistic we will continue to adapt and leverage AI to achieve even more amazing progress.
- The Value of Software Engineers
12/20/2025
Quoting Robert Greiner from Believe the Checkbook: AI companies talk as if engineering is over. Their acquisitions say the opposite.
The key constraint is obvious once you say it out loud. The bottleneck isn’t code production, it is judgment.
Regarding Anthropic’s language used in acquiring Bun:
That’s investor-speak for: we’re paying for how these people think, what they choose not to build, which tradeoffs they make under pressure. They didn’t buy a pile of code. They bought a track record of correct calls in a complex, fast-moving domain.
- Quoting Andrej Karpathy regarding LLM intelligence Animals vs Ghosts
10/21/2025
Quoting Andrej Karpathy regarding LLM intelligence Animals vs Ghosts
In my mind, animals are not an example of this at all - they are prepackaged with a ton of intelligence by evolution and the learning they do is quite minimal overall (example: Zebra at birth). Putting our engineering hats on, we’re not going to redo evolution. But with LLMs we have stumbled by an alternative approach to “prepackage” a ton of intelligence in a neural network - not by evolution, but by predicting the next token over the internet. This approach leads to a different kind of entity in the intelligence space. Distinct from animals, more like ghosts or spirits. But we can (and should) make them more animal like over time and in some ways that’s what a lot of frontier work is about.
That’s beautifully said. It paints a vivid picture of how LLM intelligence differs from biological intelligence. By training on the collective content of the internet, these models become a form of us, our past selves, our ghosts. We recognize an intelligence in them, but it’s a mistake to equate it with human or animal intelligence.
Is the current intelligence enough for AGI? Will the next AI winter come from trying to make models more like animals? Is that even a wise path to pursue?
I don’t think today’s intelligence is sufficient for true AGI. As Karpathy pointed out, it’s a fundamentally different kind of intelligence. I don’t see how this architecture evolves into something truly general. It can get closer, sure, but there will always be holes needing to be plugged. This will bring forth the next AI winter, until the next breakthrough is discovered and our capabilities reach the next level.
Still, I’m uneasy about that pursuit. There’s already so much potential in what we have now. Entire industries and creative fields haven’t even begun to fully explore it. And as a society, we’re not prepared for the intelligence we already face. However, it is in our nature to always be progressing. Perhaps by the time the next breakthrough occurs, society will have adjusted to the current level of intelligence, better preparing us for the next one.
- Responding to Has AI stolen the satisfaction from programming?
10/13/2025
Responding to Has AI stolen the satisfaction from programming?
Questions similar to this have been raised before and will continue to be for the foreseeable future. One point he mentioned sparked a thought:
The steering and judgment I apply to AI outputs is invisible. Nobody sees which suggestions I rejected, how I refined the prompts, or what decisions I made. So all credit flows to the AI by default.
Invisibility of effort is not new. People do not immediately arrive at an answer, and this is true across domains. It is similar to the idea of an “overnight success”: something may unexpectedly take off, but all the work it took to reach the point where it could take off is forgotten.
When I write code I spend a lot of my time reworking it, massaging it, expressing it well. I may have reached a working solution quickly, but it took much longer to arrive at a final solution. None of this effort is seen by others, nor do I receive “credit” for it. Perhaps that is why the author does not recognize this similarity in other people’s end products: the hard work to get them to that state is not immediately obvious.
AI has changed where we receive satisfaction. The invisibility of the effort has always been true, but that doesn’t preclude satisfaction in the process and the end result.
- Quoting Toby Stuart on Pedigree
10/6/2025
Quoting Toby Stuart on the EconTalk episode The Invisible Hierarchies that Rule Our World
The episode ended with a discussion on the impact of AI as it relates to people’s status, that it will reinforce the prestige hierarchy.
When you can’t judge quality that’s precisely the time in which you rely on pedigree.
He goes on to say:
So if you are a college admissions officer, take that problem at a place where a lot of people want to go, it’s really hard to read an essay and say “I’m admitting them because this is an outstanding essay,” if that ever happened. But what you can do is, I’ve heard of the high school, or there is some other status marker in the background, so I’m going to overweight that relative to information that formerly was a signal but it’s just noise.
Writing has been used just about everywhere to evaluate people. Now the capability of crafting well-written content is available to everyone. Consequently, we become more reliant on other indicators for evaluation, which, unfortunately, are often characteristics over which people have limited control.
This line of thought presents a sobering reality. What was once seen as an equalizer for those lacking inherited advantages potentially turns out to be detrimental to their advancement.
- Responding to I Do Not Want to Be a Programmer Anymore
10/5/2025
Responding to I Do Not Want to Be a Programmer Anymore (After Losing an Argument to AI and My Wife)
The article begins by sharing the author’s story of attempting to use AI to resolve a difference of opinion with his wife, which convinced him he was wrong. His wife’s reaction:
It wasn’t the victory that stuck with her. It was how easily I surrendered my judgment to a machine.
He gives another example from work, from which he writes:
That’s the unsettling part. We don’t just listen to the machine; we believe it. We defer to it. And sometimes, we even prefer its certainty over the reasoning of the actual humans in front of us.
His concerning conclusion:
Wisdom has always come as the byproduct of experience. But if experience itself is outsourced to machines, where will the young earn theirs?
I too have experienced being resistant to another person’s arguments, only to be won over by consulting an LLM and reasoning through them. In part this seems reasonable: ideas contrary to our own are costly for us, while ideas we arrive at, or think we arrive at, on our own we believe we have already done the work to vet.
The question, therefore, is whether we accept the AI’s answer on the first take or go back and forth with it, examining the rationale. The first is concerning: blindly accepting the response without any further examination. But I suspect that is not what occurs in most cases. Instead, we become convinced because the AI offers a nonthreatening way to explore the topic. I wonder if there are intimations of that when he says:
Clients, colleagues, even strangers are emboldened not because the machine gives them ideas, but because it gives them confidence.
In the example he gives from work, the person sent him a “detailed breakdown” of how to improve the system. It sounds to me like the person invested a lot of effort and thought into it, rather than quickly typing a question and forwarding on the AI’s response.
Circling back to his concern about wisdom, or the lack of it, I believe this highlights the need for relationship. If relationships continue to erode, mentorship is lacking, and trust in AI continues to rise, is wisdom lost?
It feels like this may be the case. But humans still accumulate experiences, from both our failures and our triumphs, and from those experiences wisdom will still either be derived or ignored. It’s hard to imagine a complete loss of wisdom. Even the author gained wisdom from the experience of bringing AI into the conversation with his wife. There is precious wisdom humankind has accumulated across our existence, and it would be a tragedy to lose it. But I have hope in humanity, that we will continue to push forward and adapt, accumulating wisdom. It is in our nature; I don’t think we can do otherwise.
- Responding to The real deadline isn't when AI outsmarts us — it’s when we stop using our own minds
10/5/2025
Replying to “You have 18 months” The real deadline isn’t when AI outsmarts us — it’s when we stop using our own minds.
And I am much more concerned about the decline of thinking people than I am about the rise of thinking machines.
I’m not precisely concerned about this. I don’t believe my thinking has declined since using these tools. Maybe it has in some trivial ways. But I believe my thinking has become more active as a result of these tools, because I am able to explore and ask questions, to investigate in ways I either could not before, or at least not as easily.
My concern is that there will be a division between people in how they use these tools. One side’s thinking will decline, while the other side’s will be enhanced, which will lead to a further imbalance in society. The statistics he references appear to support this: the declines he reports are not coming from those who were already at the high end.
Later the author answers the question of what kids should study:
While I don’t know what field any particular student should major in, I do feel strongly about what skill they should value: It’s the very same skill that I see in decline. It’s the patience to read long and complex texts; to hold conflicting ideas in our heads and enjoy their dissonance
While I do not entirely agree with his phrasing, or at least am uncertain about it, I do believe being able to work with conflicting ideas is an important skill. Perhaps if someone “enjoys” the dissonance they become energized and thrive in these situations, so maybe the language is not too strong. But at a minimum, I have found being able to wrestle with conflicting ideas to be an important life skill.
- Quoting Richard Matthew Stallman in Reasons not to use ChatGPT
10/3/2025
Quoting Richard Matthew Stallman in Reasons not to use ChatGPT:
It does not know what its output means. It has no idea that words can mean anything.… people should not trust systems that mindlessly play with words to be correct in what those words mean.
My initial reaction is agreement. But then what are the implications? Does it follow that machines will never have intelligence? What makes our intelligence different? A calculator doesn’t know what its output means. Should we trust it? Is the way the calculator works with numbers significantly different from the way the LLM works with words? It’s empirically different, but when considering the domain each operates in, is it significantly different?
Did he think about all of this and then come to his conclusion?
These are all interesting questions and conversations we should be having but I wonder if his real complaint is included in his trailing thoughts:
Another reason to reject ChatGPT in particular is that users cannot get a copy of it.
- The "banality of evil"
9/16/2025
I started reading Hannah Arendt. When she attended the trial of Adolf Eichmann, a Nazi and an organizer of the Holocaust, she observed what she called the “banality of evil”. Here’s a quote, emphasis mine.
The deeds were monstrous, but the doer—at least the very effective one now on trial—was quite ordinary, commonplace, and neither demonic nor monstrous. There was no sign in him of firm ideological convictions or of specific evil motives, and the only notable characteristic one could detect in his past behavior as well as in his behavior during the trial and throughout the pre-trial police examination was something entirely negative: it was not stupidity but thoughtlessness.
A few weeks ago I came across the opinion piece Will AI Destroy or Reinvent Education? I think this piece has lots of good thoughts.
The piece begins by talking through two articles about an MIT study on the impact of AI on our brains when we use it for writing, Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. The paper found LLM use led to weaker neural connectivity and less cognitive engagement.
Initially I thought, this is concerning and a good reason to have an awareness of how we use AI. But if people use it this way, it is to their detriment.
After reading Arendt, I’m even more concerned. Could AI use be fostering a new kind of “thoughtlessness”? If so, might horrific deeds be carried out by people who are “quite ordinary, commonplace, and neither demonic nor monstrous” simply because they stopped thinking critically?
- More empathy for that robot than for each other
9/15/2025
Quoting Jessica Kerr, Austin Parker, Ken Rimple and Dr. Cat Hicks from AI/LLM in Software Teams: What’s Working and What’s Next
Empathy
People will do things for AI that they won’t do for each other. They’ll check the outcomes. They’ll make things explicit. They’ll document things. They’ll add tests.… And all of these things help people, but we weren’t willing to help the people. It’s almost like we have more empathy for that robot than for each other… we can imagine that the AI really doesn’t know this stuff and really needs this information. And at some level, we can’t actually imagine a human that doesn’t know what we do.
This comparison lands as a visceral blow, because I feel it describes me. I consider myself a fairly empathic person, but I’m slow to create this information for other humans, yet find myself more willing to do so for AI.
Why do we behave this way? Here are some theories:
- Different expectation levels: the AI doesn’t have this background knowledge, but humans should, or at least can figure it out.
- Comparison and competition between ourselves and others.
- The impact is immediate when working with the AI, but unknown and in the future when helping humans.
- Providing these things to the AI is more self-serving, at least in the near term.
Even with these plausible explanations, I can’t quite get myself off the hook. This nagging self-awareness, however, doesn’t diminish my fear that my behavior will remain unchanged.
Participation
Another topic of this interview deserves mention:
…have a training data problems, right? And we can question what we use it for, but it’s very difficult to do that if you sit outside of it. If you set yourself apart, you have to participate.
I do think that is incumbent upon us to grapple with, you know, the reality we’re faced with… We have the universal function approximator finally and there’s no putting that toothpaste back in the tube, so we can figure out how to build empathetic systems of people and technology that are humanistic in nature, or we can let the people whose moral compass orients slightly towards their bank account make those decisions, and I know which side of it I’m on.
AI is here and it will change a lot of things. It’s understandable to be worried about the negative impact of AI, but letting that prevent you from engaging is a way of sitting on the sidelines. Instead, we have a duty to participate and shape its future.
- From magic to understanding to magic again
9/12/2025
Ethan Mollick writes that our relationship with AI is shifting, from being a partner we create with to the AI performing the work itself, On Working with Wizards. This aligns with the pursuit of AI agents, 2025 being dubbed the year of the agent. While Mollick doesn’t explicitly discuss agents, he uses the term “wizards” to describe a similar concept. He calls them wizards because:
Magic gets done, but we don’t always know what to do with the results
…wizards don’t want my help and work in secretive ways that even they can’t explain.
This presents two important challenges. First, we lose, or never develop, the skill to evaluate what was produced. Second, as we lose that ground, we are forced to trust more. As Mollick states:
every time we hand work to a wizard, we lose a chance to develop our own expertise, to build the very judgment we need to evaluate the wizard’s work.
But what I found especially striking is that throughout history, when we did not understand something, it was considered magic. Science came along and brought a method for understanding how things work. Technology replaced magic. Is this direction now reversing? Are we losing our understanding? Is magic returning? Mollick writes:
The paradox of working with AI wizards is that competence and opacity rise together. We need these tools most for the tasks where we’re least able to verify them. It’s the old lesson from fairy tales: the better the magic, the deeper the mystery.
This is a shocking realization. While many would push back against this analogy, it seems to be at least partially accurate. It raises a pressing question: Why are we so willing to trade our understanding and accept the magic in this context?
- What would you say… you do here?
9/12/2025
A constant complaint I’ve heard from software developers is that there isn’t a product owner. No one is creating requirements, no one is curating the backlog. Instead the software delivery team attempts to suss out how applications and platforms are to be built. Fair enough, it’s not very efficient to be given a high level description of something and then have to determine what it means.
I’m not going to analyze or offer my thoughts on this predicament, but I was reflecting on it while considering my current software development workflow:
- Receive a high level feature request
- Use AI to create detailed requirements based on the request and the state of the current application
- Edit the generated requirements
- Provide the final version of the requirements to an AI agent to implement
- …
If the desires of software developers were fulfilled, then pristine requirements would be created by a product owner. Software developers would then hand off the requirements to an AI agent for implementation, making software developers the new Tom, the product manager from Office Space.
What would you say… you do here?
- Three different takes on hallucinations
9/10/2025
I’ve read three different takes on hallucinations this week, and what struck me most was not how they agreed, but how differently they framed the problem. Each piece approaches hallucinations from a unique angle: technical, procedural, and philosophical. Taken together, they sketch a landscape of possibilities.
OpenAI’s Why language models hallucinate presents the view that not all questions have answers, and so models should be trained with an incentive to abstain rather than answer confidently.
One challenge remains stubbornly hard to fully solve: hallucinations. By this we mean instances where a model confidently generates an answer that isn’t true.
Most evaluations measure model performance in a way that encourages guessing rather than honesty about uncertainty.
Penalize confident errors more than you penalize uncertainty, and give partial credit for appropriate expressions of uncertainty.
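To make the quoted scoring idea concrete, here is a minimal sketch (my own illustration under assumed weights, not OpenAI’s actual evaluation code) of a rubric that penalizes confident errors more heavily than abstentions and gives partial credit for hedged answers:

```python
# Hypothetical scoring rubric; the weights are assumptions for illustration.

def score_answer(answer: str | None, correct: str, confident: bool = True) -> float:
    """Score one response under an abstention-friendly rubric."""
    if answer is None:
        return 0.0                        # abstaining ("I don't know") is neutral
    if answer == correct:
        return 1.0 if confident else 0.7  # partial credit for a hedged correct answer
    return -2.0 if confident else -0.5    # confident errors are penalized hardest

# Under these weights, blind guessing at 30% accuracy has negative expected
# value (0.3 * 1.0 + 0.7 * -2.0 = -1.1), while abstaining scores 0.0,
# so the incentive to bluff disappears.
```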
Then there’s Is the LLM response wrong, or have you just failed to iterate it?, which suggests that inaccurate responses are often the result of receiving an answer too soon. If pushed further, by having the model iterate, it can examine the evidence and follow new lines of discovery, much like humans do.
But the initial response here isn’t a hallucination, it’s a mixture of conflation, incomplete discovery, and poor weighting of evidence. It looks a lot like what your average human would do when navigating a confusing information environment.
LLMs are no different. What often is deemed a “wrong” response is often merely a first pass at describing the beliefs out there. And the solution is the same: iterate the process.
Finally, there is Knowledge and memory, which suggests hallucinations will not go away because knowledge must be tied to memory. Humans feel the solidity of facts, while models lack the experiences required to ground their knowledge.
Language models don’t have memory at all, because they don’t have experiences that compound and inform each other.
Many engineers have pinned their hopes on the context window as a kind of memory, a place where “experiences” might accrue, leave useful traces. There’s certainly some utility there… but the analogy is waking up in a hotel room and finding a scratchpad full of notes that you don’t remember making… but the disorientation of that scenario should be clear.
The solid, structured memory that we use to understand what we know and don’t know — when and when not to guess — requires time, and probably also a sort of causal web, episodes and experiences all linked together.
Each of these pieces makes interesting points, and together they explain different facets of model hallucination. Models are too eager to provide an answer. There are many uncertainties and “it depends” in life. Incentivizing models to reflect this may irritate users, but it better mirrors reality.
However, these responses make clear that what we receive is only a first pass, one that should be refined by iterating, digging deeper, and pushing the model further. Perhaps this process of discovery is still not enough to create true memory, as the third author points out, but it does seem to edge closer to mimicking a brief experience.
Currently, model context is built by labeling messages as system, user, or agent. We’ve learned it would be better to create a hierarchy of significance for these categories: system messages should carry more weight and not be overridden by user messages. What if context were segmented along other dimensions, like time, so a model could build a clearer picture of what it has learned?
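As a rough sketch of that idea (entirely hypothetical; this is not how any particular provider structures context), each message could carry a role plus a time dimension, letting significance decay with age while system instructions keep the most weight:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical context entry combining role-based weight with a time dimension.
ROLE_WEIGHT = {"system": 3.0, "user": 1.0, "assistant": 0.5}  # illustrative weights

@dataclass
class ContextMessage:
    role: str
    content: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def significance(msg: ContextMessage, now: datetime) -> float:
    """Toy scoring: role weight decayed by the message's age in hours."""
    age_hours = max((now - msg.timestamp).total_seconds() / 3600, 0.0)
    return ROLE_WEIGHT.get(msg.role, 1.0) / (1.0 + age_hours)

history = [
    ContextMessage("system", "Only answer from the provided documents."),
    ContextMessage("user", "Summarize last quarter's incidents."),
]
ranked = sorted(history, key=lambda m: significance(m, datetime.now(timezone.utc)), reverse=True)
```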
Humans also continue processing conversations or experiences outside the event itself. What if models pushed themselves to dig deeper without user prompting, allowing them to provide a more thoughtful answer after the interaction had ended?
There is still so much more to explore; we are far from exhausting what’s possible.
- 11 Lessons to Get Started Building AI Agents
9/6/2025
I’ve only watched one video, but this seems like a promising course for getting up and running productively with Semantic Kernel agents.