-
This Martin Luther King Jr. Day, I found myself reflecting on the immense gap of privilege created during the era of American slavery. Slavery existed globally, but its impact on Black Americans was uniquely devastating. While there are more reasons for this than I’m even aware of, two in particular strike me presently. First, the Industrial Revolution generated tremendous prosperity that Black Americans were excluded from. Second, they were unable to participate in the opportunity of cheap, claimable land.
My own family’s history benefited from this land. My great-grandfather arrived in the Pacific Northwest and purchased farmland. He established a family farming corporation that accumulated numerous plots of land which our family still owns today. As the last of the family farmers prepares to retire, that land will be sold, resulting in a substantial profit.
These historical factors placed one group of Americans at an incredible advantage while another was fighting for the basic rights promised, but not yet bestowed, by the U.S. Constitution. By the time the Civil Rights movement secured these rights, the economic damage was already deep-seated.
Artificial Intelligence represents a unique moment in our history. It holds the potential to bridge this gap. However, I fear the opposite will occur. I fear that AI will not decrease the gap, but rather widen it until it becomes insurmountable.
With hindsight, we look back at the past with shame, wishing we could change what transpired. We cannot change the past. But I wonder if we are currently overlooking a crucial moment to change the future, or if we will repeat the sins of our fathers?
-
If I do not trust my intuition does that mean I do not value my past experiences?
That I’m disregarding how my brain was designed to function?
-
After listening to the EconTalk podcast episode Nature, Nurture, and Identical Twins (with David Bessis), I read the essay David Bessis wrote, Twins reared apart do not exist, which was the subject of the episode. Both were informative and helped me understand the conversation between hereditarians and blank‑slatists. But something else caught my attention in the essay.
Bessis begins with three illustrations of potential values for the heritability of IQ, 30%, 50%, and 80%. He maps genetic potential against actual IQ for each percentage. In this post I’m not addressing questions about IQ, its measurement, or use. What sparked my interest is the distinction between potential and actual, and whether that distinction adds to the conversation on what differentiates human thinking from AI thinking.
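To make the potential-versus-actual distinction concrete for myself, here is a minimal toy sketch in Python (my own illustration, not Bessis’s model or data): heritability fixes how much of the spread in measured IQ tracks genetic potential, and the rest comes from everything else a life accumulates.

```python
# Toy model: heritability h2 is the share of variance in measured IQ tied to
# genetic potential; the remainder comes from environment and experience.
# Requires Python 3.10+ for statistics.correlation.
import random
import statistics

def simulate(h2: float, n: int = 10_000, seed: int = 0) -> float:
    """Correlation between genetic potential and simulated 'actual' IQ."""
    rng = random.Random(seed)
    genetic = [rng.gauss(0, 1) for _ in range(n)]      # genetic potential (z-scores)
    environment = [rng.gauss(0, 1) for _ in range(n)]  # everything else a life accumulates
    # Actual IQ mixes the two; h2 sets the genetic share of the variance.
    actual = [100 + 15 * ((h2 ** 0.5) * g + ((1 - h2) ** 0.5) * e)
              for g, e in zip(genetic, environment)]
    return statistics.correlation(genetic, actual)

for h2 in (0.3, 0.5, 0.8):
    print(f"heritability {h2:.0%}: corr(genetic potential, actual IQ) ≈ {simulate(h2):.2f}")
```

Under this toy model the correlation works out to roughly the square root of the heritability, so even at 80% heritability there is plenty of room left for where a person actually ends up.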
It seems to me that genetics does set limits on the range of capacity each human is able to achieve in thinking, but it is not a predetermined number. Other factors have significant influence on where each human ends up in this range. Therefore, there is some capacity the individual human can achieve, but there is also some measured value designating where they currently are.
This leads me to believe that there are many factors that influence how the brain thinks. It’s not about crunching data about a question and ending up at a result. Instead there’s a lot of seemingly unrelated data accumulated over a lifetime of experiences which mingle in the brain, impacting pathways on the way to producing the thought.
How does AI compare when we map genetic potential and actual IQ onto machines? Is it accurate to say that their training is their genetic potential and reinforcement learning is their actual intelligence? I don’t think it is. In humans, actual intelligence continues to be shaped by interactions with the world; the experiences we have influence us. For AI, once the model is released it is fixed. Additional information can be provided that influences its generated responses, but its intelligence is locked in; its genetic potential is still its actual intelligence.
In the end an AI’s intelligence is hereditary rather than blank‑slate, and that leads to a very different form of thinking. AI thinking is not human thinking.
-
On a whim I borrowed Murder in the Cathedral by T.S. Eliot. I read it quickly. Afterward, I wasn’t sure what to make of it. Based on the chorus of the townspeople and the speeches from the knights, was I supposed to side with Becket, or were his motives impure? There were a couple of memorable lines that stayed with me, but mostly detached from the story.
The first line, without context:
Now is my way clear, now is the meaning plain: Temptation shall not come in this kind again. The last temptation is the greatest treason: To do the right deed for the wrong reason. The natural vigour in the venial sin Is the way in which our lives begin. Thirty years ago, I searched all the ways That lead to pleasure, advancement and praise. Delight in sense, in learning and in thought, Music and philosophy, curiosity, The purple bullfinch in the lilac tree, The tilt-yard skill, the strategy of chess, Love in the garden, singing to the instrument, Were all things equally desirable. Ambition comes when early force is pent And when we find no longer all things possible. Ambition comes behind and unobservable. Sin grows with doing good…
The second line, without context:
Peace, and be at peace with your thoughts and visions. These things had to come to you and you to accept them. This is your share of the eternal burden, The perpetual glory. This is one moment, But know that another Shall pierce you with a sudden painful joy When the figure of God’s purpose is made complete. You shall forget these things, toiling in the household. You shall remember them, droning by the fire, When age and forgetfulness sweeten memory Only like a dream that has often been told And often been changed in the telling. They will seem unreal. Human kind cannot bear very much reality.
These lines were enough to pull me back in. I began reading from the second excerpt to the end of the book with focus and attention. Then I went back and read the first part and the remaining sections.
Relevant today
I was still uncertain what to make of the story. Motives remained unclear to me, both Becket’s and the knights’. Therefore, I interacted with the content further. In the end, I found the message fascinating, and relevant today.
Motives
The townspeople want not to rock the boat. Life is tough at times, but manageable. They get by.
Becket sees a higher role than the worldly affairs of king and country. The role he was placed in was compromising him. With conviction, he broke from the king to follow the higher calling, and he refused to shrink from the consequences.
The knights didn’t want to do it, but in their view Becket had betrayed the king, whose agenda was to unite the country for everyone’s good.
Today
This leads me to ponder the state of today’s world. To what extent we’re living in an exceptional time, I’m not sure.
At my current stage in life, I see myself most similar to the townspeople. Life unfolds. There’s bad. There’s good. This is life. The pendulum will continue to swing back and forth, in my life and in the world. Sometimes it will swing too far for too long, but it will eventually adjust back.
The king proclaims he’s making decisions for the country’s good. The knights loyally back him up, maybe unquestioningly, even while it costs them tremendously. We tend to form a perspective of the world, then take action according to what we think is best.
Becket has his own perspective, higher than country. This perspective did not jibe with the king. He also acted according to what he thought was best. It cost him tremendously, too.
In the end
I think Eliot intends to hold Becket up as a noble example, inspiring the townspeople to desire something greater than the impure agenda of the king and his followers. I’m inspired. But I don’t think it’s that simple. I want it, but I fear it leads to more of the same.
-
Over the last few years, our role in working with generative AI has been shifting. Each year, the work moves a little further away from “writing the perfect prompt” and a little closer to shaping how AI operates in real environments.
2024: Prompt Engineering
Our role was crafting prompts to draw out of AI the knowledge and behavior we desired.
But prompting isn’t enough when the AI doesn’t have the right information.
2025: Context Engineering
Our role was providing the AI with context so that it had the relevant information for the task. Context also provided guardrails, keeping the AI focused on the desired aspect of the task instead of straying into other areas. Tools gave it new abilities, allowing it to gather its own context.
2026: Teaching
Once AI can retrieve context for itself, the next challenge becomes how it interprets and applies the information.
Our role will be guiding the AI on the context it retrieves. We will need to correct it when it applies information mistakenly, whether from unawareness of the complete task or from overlooking pertinent information it should be referencing. It needs to be instructed on how to wield the great knowledge it has access to.
AI and knowledge work
This describes a broader transition in knowledge work.
AI excels at knowledge work. However, knowledge work is not limited to performing a task. Currently, information about the work, about what is needed, is spread across many systems, channels, and people. It still takes humans who are knowledgeable about the higher goal, and who know where to seek information, to assemble a complete picture of what is needed and pull out what is desired.
In my previous position I worked at a financial institution. The software development department did not build the primary systems. Instead it built systems that integrated them, giving both employees and customers a view of the data and the ability to perform operations on it. This provided a distinct advantage to the institution, because information was not siloed.
Our relationship with AI seems to be following a similar model. AI excels at performing tasks, but it still requires human oversight to bring everything together, connect the systems, and draw out the information.
The progression from prompt engineering to context engineering to teaching is really a shift in where the human value sits: less in producing the output, and more in guiding how the output is produced.
-
What is it that makes mathematics so inscrutable?
It reminds me of my naive days at university learning computer science. I held Perl in high esteem, although I never knew it well. Its opaqueness made me feel like I was part of an exceptional group of people.
A couple of years later, at my first job after school and having not touched Perl since, the company I worked for contracted with an individual to develop a web application. After the application was built and in use, the business owner wanted to keep developing it and asked me if I could take it over. I cautiously accepted. Upon opening the project I immediately knew I would not be able to continue its development productively. I went back to the business owner and told him the web application was beyond my ability, and that while I could figure it out given time, it wouldn’t be an efficient use of my time. I felt defeated. Only later did I learn that Perl is considered a write-once, read-never language.
From then on I began prioritizing the readability of the applications I create. I still often do not succeed at this goal to the extent I would like, but I am very mindful of this quality.
I’m undecided whether this quality matters less with AI coding agents. I don’t think it does. It still seems very helpful for coding agents to have well-structured and readable codebases. Additionally, it still seems important for humans to be able to read and understand the code as the application is being built by the coding agent, both to direct and to correct the generated code.
Mathematics, to me, is like Perl. Instead of welcoming people into the world of mathematics, it feels as though mathematicians have built walls around their knowledge, keeping people out. It is true that symbolic notation is valuable to those intimate with its usage, but it is a barrier for all other audiences.
I was reviewing Bayes’ Theorem. At first sight the equation appears intimidating, but once you understand it, it is fairly simple. Because of the way it is written, our brains need to go through several layers of translation, mapping each symbol to its meaning, which is then mapped to the scenario at hand. It seems better to me to present the equation written out: one less translation, and less prior knowledge required. Others unfamiliar with the equation will immediately gain an initial understanding of what it means, instead of feeling defeated.
Probability of A given B = Likelihood of B assuming A * Prior belief about A / Sum of likelihood of B for all scenarios
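For reference, one conventional way to write the same statement in symbols, with each piece mapped back to the written-out phrase above:

$$
P(A \mid B) = \frac{P(B \mid A)\,P(A)}{\sum_i P(B \mid A_i)\,P(A_i)}
$$

Here $P(A \mid B)$ is the probability of A given B, $P(B \mid A)$ is the likelihood of B assuming A, $P(A)$ is the prior belief about A, and the denominator is the sum of the likelihood of B across all scenarios $A_i$, each weighted by its prior.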
Edit:
It seems David Bessis is of a similar mind, A Mind-Blowing Way of Looking at Math (with David Bessis).
The issue with mathematics is it’s something that manifests itself in a horrible way. It’s on paper, on the blackboard; you see cryptic symbols, formulas; and this is impossible to make sense of. But, how you interact with that—how you gradually tune your intuition to build up meaning for the symbols—is the real art of mathematics.
Math books are written in a certain way that follows a certain logic that is called logical formalism. It’s a kind of recipe for building mathematical objects, but the words make no sense to you when you open them, so you can’t read them.
That was what I call the tourist menu. He was showing me the tourist menu with the kind of very formal dishes that nobody really wants to eat, but they look like it’s a fancy place. And, he was presenting his research mathematics the same exact way.
And, when I told him, ‘Please repeat. Repeat it as if I had some disability, as if my brain was damaged. Because I am jet-lagged, I’m tired, whatever, I’m stupid, I’m a slow thinker. Please be as simple as you can, I don’t understand anything.’ So, when I did that, I gave him the permission to serve me the true menu, the thing for the locals—how he really himself looked at his mathematics. And he was using different words and describing the situation using very simple images, examples. And it was a different thing.
-
I find myself listening to many strong advocates of AI. I can feel the pull of the hype, and I can see how that exposure has subtly biased my expectations.
That bias became more apparent while listening to the podcast Vibe Coding Manifesto: Why Claude Code Isn’t It & What Comes After the IDE. As Steve Yegge described both his current practices and his vision of the future, I found myself increasingly skeptical.
Soon after, I encountered two pieces that pushed back against this vision. The first was The Future of Software Development is Software Developers, which reasserts the central role of software developers:
But, when it matters, there will be a software developer at the wheel. And, if Jevons is to be believed, probably even more of us.
The second, more forceful critique came from Rich Hickey in Thanks AI!:
When did we stop considering things failures that create more problems than they solve?
I have been a software developer for 25 years. My scope has been limited, and my projects have generally been on the smaller side. Over that time, I have learned where I have over-invested, making systems more complex than necessary, and where I have under-invested, missing opportunities as a result. I have also learned that I cannot keep up with every new innovation, nor should I try.
So what do I make of AI in software development?
I do see it as a powerful tool. People will use it in many different ways, and that experimentation matters. But these uses are still experiments. Their successes and failures will shape what comes next. Progress has almost always worked this way.
What feels different now is the perceived cost of hesitation. With AI, the fear of being left behind feels stronger than usual, especially given that caution is typically the default.
I count myself among those who feel that pull. My hope is to proceed with awareness, to experiment deliberately, and to form my own perspective through experience rather than hype.
-
Recently Codex was updated to also leverage Agent Skills, established by Anthropic.
One friction point I have with this blog is creating the Astro frontmatter for the blog posts. I decided this would be a good skill for Codex.
To create the skill I added a new directory for the skill:
`.codex/skills/blog-template`. Then I added these instructions to a `SKILL.md` file in the directory’s root.

````markdown
---
name: blog-template
description: Add or complete Astro Markdown frontmatter for blog posts (title, pubDate, description, tags) by inferring values from the post content. Use when asked to add headers/frontmatter to Markdown in src/content/blog, ensuring only these fields are present and only missing ones are filled.
---

# Blog Template

## Overview

- Add or complete an Astro frontmatter block for a blog post while leaving the body untouched.
- Only include `title`, `pubDate`, `description`, and `tags`; ignore other fields.
- If frontmatter already exists, preserve existing values and only fill missing fields.

## Workflow

1. Detect existing frontmatter at the top of the file. Keep provided values for the four allowed fields; drop any other keys from the new block.
2. Derive field values from the post content:
   - **title**: Prefer the first level-1 heading or the clearest inferred title; use sensible title case and avoid trailing punctuation.
   - **pubDate**: Keep existing value if present; otherwise set to today in `YYYY-MM-DD`.
   - **description**: Write a concise 1–2 sentence summary (often one line is enough). Multi-line is allowed using `|` but keep it brief and accurate.
   - **tags**: Infer key topics/subjects from the post. Rules: lowercase; hyphenate spaces; no punctuation; cap at 6; unique; required even if guessed. Prefer specific nouns over generic filler.
3. Emit a single frontmatter block at the very top in this form, then the untouched body:

   ```yaml
   ---
   title: |
     Example Title
   pubDate: 2025-12-01
   description: |
     One-sentence summary of the post.
   tags: ['topic-one', 'topic-two']
   ---
   ```

## Tag selection hints

- Choose the main themes, people, places, or technologies mentioned.
- Skip redundant variants; prefer one canonical form (e.g., `ai`, not both `ai` and `artificial-intelligence`).
- If content is thin, still provide tags that best match the subject matter.
````
So far I've been very happy with the results. I often lightly edit what it generates, but it gets me close. A future enhancement is to have Codex first generate a list of existing blog post tags, and then use the list as a reference when generating tags for the new post.
-
An Ask HN question came up yesterday on how others are sandboxing coding agents.
I have not taken sandboxing seriously. When I previously researched the topic, the information and tooling to accomplish it seemed lacking. I figured that for my minimal usage I could manually approve each request the AI makes. But as my usage grows and products mature, a safer and more efficient approach is needed. I hear more stories about AI discovering and using unintended secret information, mistakenly deleting directories outside the project, and exfiltrating private data.
The HN question did not receive a lot of responses. I considered a few, but didn’t generate a deep research query of my own. Of the options presented, creating a Lima VM seemed the easiest with sufficient security for my usage, although the steps became more involved as I implemented the solution. Below are the steps.
Lima VM installation
The install instructions Lima provided did not work for me, so I downloaded the latest release myself from their releases page and installed it:
```sh
sudo tar -C /usr/local -xzf lima-2.0.3-Linux-x86_64.tar.gz
```

You may need to install the QEMU libraries as well:

```sh
sudo dnf install qemu-img qemu-kvm
```

The VM needs to mount the project directory so the project files can be accessed. To do so we need to configure the SELinux policy settings by adding a file label. Then the label needs to be applied to all of the existing files within the directory.
- `-a`: Add policy
- `-t`: The label type; in this example, access to files which exist within the home directory
- `sandbox-test`: The directory to apply the policy to
```sh
sudo semanage fcontext -a -t svirt_home_t "sandbox-test(/.*)?"
sudo restorecon -Rv sandbox-test
```

Creating a VM
A configuration file can be used so that the VM is created with the needed dependencies, as well as other VM settings (dotnet-sandbox.yaml):
```yaml
images:
  - location: "https://cloud-images.ubuntu.com/releases/24.04/release/ubuntu-24.04-server-cloudimg-amd64.img"
    arch: "x86_64"
cpus: 4
memory: "8GiB"
mountType: "9p"
provision:
  - mode: system
    script: |
      apt-get update
      # The .NET application needs the SDK
      apt-get install -y dotnet-sdk-10
      # The React frontend needs NPM
      snap install node --classic
      # Using the Codex CLI
      npm install -g @openai/codex
      # Codex expects `python`, not `python3`
      apt-get install -y python-is-python3
```

Now the VM can be created and started. Note the `:w`, which makes the mounted directory writable:

```sh
limactl start --name=dotnet-sandbox --mount-only .:w dotnet-sandbox.yaml
```

Follow the output instructions for entering the VM’s shell:
```sh
limactl shell dotnet-sandbox
```

Other commands
- Stop: `limactl stop dotnet-sandbox`
- Delete: `limactl delete dotnet-sandbox`
Set up the commit config for the coding agent:
```sh
git config --global user.name "AI Agent"
git config --global user.email "agent@internal.sandbox"
```

Codex cannot push to git remotes without access, but to further prevent Codex from being able to push, a rule can be added (.rules):
{ "rules": [ { "pattern": ["git", "push"], "action": "forbidden" } ] } -
Commenting on AI will make our children stupid: We are creating a terrible learning environment for the young
The line of thinking expressed in the article:
- IQ is declining
- Attention spans weakening
- AI allows children to outsource their thinking entirely
- They possess the answer but lack the understanding of how it was derived
- Those in authority believe exams need to be abolished in order to embrace AI
- The process of writing is itself constitutive of understanding. Writing is thinking.
- Learning requires friction
The article paints a bleak future. My thoughts, pushing back on their position:
- It seems that for a few generations now, older generations have been concerned about the softening of the younger one. To me, generations are different, but they are still capable.
- Each person will leverage AI in a jagged manner. Each will outsource some portion of their thinking, some to a concerning degree and to their detriment. My concern is that the advantaged will do so more responsibly, because of oversight and training.
- Education does need to change. Many have thought so for a long time. But for a variety of reasons, education is either slow or resistant to change. AI may force the change to occur.
- Friction is core to learning, but learning is even better if it is hard fun.
- Humans have adapted amazingly well to the changes and different environments we have found ourselves in. Granted, we haven’t done so perfectly, and at times the price has been high, but we continue forward. That won’t always be the case, but I’m optimistic we will continue to adapt and leverage AI to achieve even more amazing progress.
-
Quoting Robert Greiner from Believe the Checkbook: AI companies talk as if engineering is over. Their acquisitions say the opposite.
The key constraint is obvious once you say it out loud. The bottleneck isn’t code production, it is judgment.
Regarding Anthropic’s language used in acquiring Bun:
That’s investor-speak for: we’re paying for how these people think, what they choose not to build, which tradeoffs they make under pressure. They didn’t buy a pile of code. They bought a track record of correct calls in a complex, fast-moving domain.
-
Quoting Andrej Karpathy on Agency
Agency > Intelligence
I had this intuitively wrong for decades, I think due to a pervasive cultural veneration of intelligence, various entertainment/media, obsession with IQ etc. Agency is significantly more powerful and significantly more scarce. Are you hiring for agency? Are we educating for agency? Are you acting as if you had 10X agency?
He then shares a quote from Grok on agency. Here are some highlights from the quote:
- someone with high agency doesn’t just let life happen to them; they shape it.
- someone low in agency might feel more like a passenger in their own life
- high-agency folks lean toward an internal locus, feeling they steer their fate, while low-agency folks might lean external, seeing life as something that happens to them.
-
Quoting Unmesh Joshi via The Learning Loop and LLMs
An AI can generate a perfect solution in seconds, but it cannot give you the experience you gain from the struggle of creating it yourself. The small failures and the “aha!” moments are essential features of learning, not bugs to be automated away.
Relatedly, on the topic of learning, I was pondering today how popular AI chat apps are great at providing answers, but providing answers is often at odds with learning.
This brings to mind the concept of designing a system so that users fall into the pit of success instead of the pit of despair. The popular AI chat systems are designed so that, when used for learning, users by default fall into the pit of despair; they are detrimental to learning. But it does not need to be that way. Chat systems can be designed so that users fall into the pit of success, so that by default they enrich the learning process.
-
A complex system that works is invariably found to have evolved from a simple system that worked
-
Young man, in mathematics you don’t understand things. You just get used to them.
-
To really understand a concept, you have to “invent” it yourself in some capacity. Understanding doesn’t come from passive content consumption. It is always self-built. It is an active, high-agency, self-directed process of creating and debugging your own mental models.
François Chollet (via)
-
Quoting Andrej Karpathy regarding LLM intelligence Animals vs Ghosts
In my mind, animals are not an example of this at all - they are prepackaged with a ton of intelligence by evolution and the learning they do is quite minimal overall (example: Zebra at birth). Putting our engineering hats on, we’re not going to redo evolution. But with LLMs we have stumbled by an alternative approach to “prepackage” a ton of intelligence in a neural network - not by evolution, but by predicting the next token over the internet. This approach leads to a different kind of entity in the intelligence space. Distinct from animals, more like ghosts or spirits. But we can (and should) make them more animal like over time and in some ways that’s what a lot of frontier work is about.
That’s beautifully said. It paints a vivid picture of how LLM intelligence differs from biological intelligence. By training on the collective content of the internet, these models become a form of us, our past selves, our ghosts. We recognize an intelligence in them, but it’s a mistake to equate it with human or animal intelligence.
Is the current intelligence enough for AGI? Will the next AI winter come from trying to make models more like animals? Is that even a wise path to pursue?
I don’t think today’s intelligence is sufficient for true AGI. As Karpathy pointed out, it’s a fundamentally different kind of intelligence. I don’t see how this architecture evolves into something truly general. It can get closer, sure, but there will always be holes needing to be plugged. This will bring forth the next AI winter, until the next breakthrough is discovered and our capabilities reach the next level.
Still, I’m uneasy about that pursuit. There’s already so much potential in what we have now. Entire industries and creative fields haven’t even begun to fully explore it. And as a society, we’re not prepared for the intelligence we already face. However, it is in our nature to always be progressing. Perhaps by the time the next breakthrough occurs, society will have adjusted to the current level of intelligence, better preparing us for the next level.
-
Responding to Has AI stolen the satisfaction from programming?
Questions like this have been raised before and will continue to be raised for the foreseeable future. One point he mentioned sparked a thought:
The steering and judgment I apply to AI outputs is invisible. Nobody sees which suggestions I rejected, how I refined the prompts, or what decisions I made. So all credit flows to the AI by default.
Invisibility of effort is not new. People do not immediately arrive at an answer; this is true across domains. It is similar to the idea of an “overnight success”: something may unexpectedly take off, but all the work it took to reach the state where it could take off is forgotten.
When I write code I spend a lot of my time reworking it, massaging it, expressing it well. I may reach a working solution quickly, but it takes much longer to reach a final solution. None of this effort is seen by others, nor do I receive “credit” for it. Perhaps that is why the author does not recognize the same thing in other people’s end products: the hard work it took to get them to that state is not immediately obvious.
AI has changed where we receive satisfaction. The invisibility of the effort has always been true, but that doesn’t preclude satisfaction in the process and the end result.
-
Quoting Toby Stuart on the EconTalk episode The Invisible Hierarchies that Rule Our World
The episode ended with a discussion of the impact of AI on people’s status: that it will reinforce the prestige hierarchy.
When you can’t judge quality that’s precisely the time in which you rely on pedigree.
He goes on to say:
So if you are a college admissions officer, take that problem at a place where a lot of people want to go, it’s really hard to read an essay and say “I’m admitting them because this is an outstanding essay,” if that ever happened. But what you can do is, I’ve heard of the high school, or there is some other status marker in the background, so I’m going to overweight that relative to information that formerly was a signal but it’s just noise.
Writing has been used just about everywhere to evaluate people. Now the ability to craft well-written content is available to everyone. Consequently, we become more reliant on other indicators for evaluation, which, unfortunately, are often characteristics over which people have limited control.
This line of thought presents a sobering reality. What was once seen as an equalizer for those lacking inherited advantages potentially turns out to be detrimental to their advancement.
-
Responding to I Do Not Want to Be a Programmer Anymore (After Losing an Argument to AI and My Wife)
The article begins by sharing a story of the author attempting to use AI to resolve a difference of opinion with his wife, which convinced him he was wrong. His wife’s reaction:
It wasn’t the victory that stuck with her. It was how easily I surrendered my judgment to a machine.
He gives another example from work, from which he writes:
That’s the unsettling part. We don’t just listen to the machine; we believe it. We defer to it. And sometimes, we even prefer its certainty over the reasoning of the actual humans in front of us.
His concerning conclusion:
Wisdom has always come as the byproduct of experience. But if experience itself is outsourced to machines, where will the young earn theirs?
I too have experienced being resistant to another person’s arguments, only to be won over by consulting an LLM and reasoning through them. In part this seems reasonable: ideas from others that are contrary to our own are costly for us, while ideas we arrive at, or think we arrive at, on our own we believe we have already done the work to vet.
Therefore, the question is whether we accept the AI’s answer on the first take, or go back and forth with the AI, examining its rationale. The first is concerning: blindly accepting the response without any further examination. But I suspect that is not what occurs in most use cases. Instead, we become convinced by it because it is a nonthreatening way to explore the topic. I wonder if there are intimations of that when he says:
Clients, colleagues, even strangers are emboldened not because the machine gives them ideas, but because it gives them confidence.
In the example he provides from work, the person sent him a “detailed breakdown” of how to improve the system. It sounds to me like the person invested a lot of effort and thought into this, rather than quickly typing a question and forwarding on the AI’s response.
Circling back to his concern about wisdom, or the lack of it, I believe this highlights the need for relationship. If relationships continue to erode, mentorship is lacking, and trust in AI continues to rise, then is wisdom lost?
It feels like this may be the case. But humans still accumulate experiences, from both our failures and our triumphs, and from those experiences wisdom will still either be derived or ignored. It’s hard to imagine a complete loss of wisdom. Even the author gained wisdom from the experience of bringing AI into the conversation with his wife. There is precious wisdom humankind has obtained across our existence, which would be a tragedy to lose. But I have hope in humanity, that we will continue to push forward and adapt, accumulating wisdom. It is in our nature; I don’t think we can do anything otherwise.