Posts tagged with "software-developer"
- More empathy for that robot than for each other
9/15/2025
Quoting Jessica Kerr, Austin Parker, Ken Rimple and Dr. Cat Hicks from AI/LLM in Software Teams: What’s Working and What’s Next
Empathy
People will do things for AI that they won’t do for each other. They’ll check the outcomes. They’ll make things explicit. They’ll document things. They’ll add tests.… And all of these things help people, but we weren’t willing to help the people. It’s almost like we have more empathy for that robot than for each other… we can imagine that the AI really doesn’t know this stuff and really needs this information. And at some level, we can’t actually imagine a human that doesn’t know what we do.
This comparison lands like a visceral blow because I feel it describes me. I consider myself a fairly empathetic person, yet I'm slow to create this information for other humans while finding myself more willing to do so for AI.
Why do we behave this way? Here are some theories:
- Different expectation levels, e.g., the AI doesn’t have this background knowledge, but humans should, or at least can figure it out.
- Comparison and competition between ourselves and others.
- The impact is immediate when working with the AI, but unknown and in the future when working with humans.
- Providing these things to the AI is more self-serving, at least in the near term.
Even with these plausible explanations, I can’t quite get myself off the hook. This nagging self-awareness, however, doesn’t diminish my fear that my behavior will remain unchanged.
Participation
Another topic of this interview deserves mention:
…have a training data problem, right? And we can question what we use it for, but it’s very difficult to do that if you sit outside of it. If you set yourself apart, you have to participate.
I do think that is incumbent upon us to grapple with, you know, the reality we’re faced with… We have the universal function approximator finally and there’s no putting that toothpaste back in the tube, so we can figure out how to build empathetic systems of people and technology that are humanistic in nature, or we can let the people whose moral compass orients slightly towards their bank account make those decisions, and I know which side of it I’m on.
AI is here and it will change a lot of things. It’s understandable to worry about AI’s negative impacts, but letting that worry keep you from engaging is a way of sitting on the sidelines. Instead, we have a duty to participate and shape its future.
- What would you say… you do here?
9/12/2025
A constant complaint I’ve heard from software developers is that there isn’t a product owner: no one is creating requirements, no one is curating the backlog. Instead, the software delivery team attempts to suss out how applications and platforms are to be built. Fair enough; it’s not very efficient to be given a high-level description of something and then have to determine what it means.
I’m not going to analyze or offer my thoughts on this predicament, but I was reflecting on it while considering my current software development workflow:
- Receive a high-level feature request
- Use AI to create detailed requirements based on the request and the state of the current application
- Edit the generated requirements
- Provide the final version of the requirements to an AI agent to implement
- …
If the desires of software developers were fulfilled, then pristine requirements would be created by a product owner. Software developers would then hand off those requirements to an AI agent for implementation, making software developers the new Tom, the product manager from Office Space.
What would you say… you do here?