My brother Javier is an accountant. He runs his own practice and handles the books for our family’s restaurants. He’s also building a web app that analyzes restaurant financials — and I’ve been helping him with the development side, using AI to accelerate the process.
Last week, we spent an hour on a Google Meet trying to get the app’s Google Sheets export working. It took five failures before we got it right. But what happened during that hour taught both of us something important about how AI-assisted troubleshooting actually works — and why most people give up too early.
The Setup
Javier had added four export buttons to the app: PDF, Excel, CSV, and Google Sheets. Three of them worked fine. The Google Sheets button didn’t. It needed the app to authenticate with Google’s APIs, and that meant wading into OAuth configuration, API permissions, and environment setup.
We used an AI coding assistant to guide us through the process. And this is where it gets interesting — not because the AI got it right the first time, but because it didn’t.
Five Failures
Here’s what actually happened, step by step.
Failure 1: The app didn’t have the credentials. The Google client ID was stored as an environment variable in AWS Amplify, but the current deployment had been built before we added the variable. The app literally didn’t know the credentials existed. Fix: redeploy.
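This failure mode is worth guarding against in code. A minimal sketch of a fail-fast check (the variable name `GOOGLE_CLIENT_ID` is a hypothetical stand-in; Amplify bakes environment variables in at build time, so a deployment built before the variable was added simply won't see it):

```python
import os

def get_google_client_id() -> str:
    """Fail fast if the OAuth client ID never made it into the build.

    GOOGLE_CLIENT_ID is a hypothetical variable name used for
    illustration. A deployment built before the variable was added
    will read it as missing, no matter what the hosting console shows.
    """
    client_id = os.environ.get("GOOGLE_CLIENT_ID", "")
    if not client_id:
        raise RuntimeError(
            "GOOGLE_CLIENT_ID is not set. If it was added recently, "
            "the app may need a redeploy before it takes effect."
        )
    return client_id
```

A loud error at startup would have turned an hour of silent confusion into an obvious first step.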
Failure 2: “Error 400: access_denied.” After redeploying, we tried again. Google told us the app was still in development mode and Javier’s account wasn’t listed as a test user. We had to navigate to the Google Cloud Console, find the OAuth consent screen (which Google had reorganized since the AI’s training data was collected), and add Javier’s email as a test user. The menus had moved. We had to hunt.
Failure 3: Authentication passed, but authorization failed. This was the moment I got to explain something to Javier that trips up even experienced developers: authentication and authorization are different things. Authentication means Google knows who you are. Authorization means Google knows what you’re allowed to do. We’d passed the first gate but not the second. We cleared the cached tokens from local storage and tried again.
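The distinction is easy to see in code. A sketch, assuming a token-info dict in the shape Google's tokeninfo-style responses use (an identity in `sub`, a space-delimited `scope` string); the function itself is illustrative, not from the app:

```python
SHEETS_SCOPE = "https://www.googleapis.com/auth/spreadsheets"

def check_token(token_info: dict) -> tuple[bool, bool]:
    """Return (authenticated, authorized_for_sheets).

    Authentication: Google verified who you are, so the token carries
    an identity. Authorization: the token actually grants permission
    to do the thing, here writing spreadsheets. You can easily have
    the first without the second.
    """
    authenticated = bool(token_info.get("sub"))  # verified user ID
    granted = set(token_info.get("scope", "").split())
    authorized = SHEETS_SCOPE in granted
    return authenticated, authorized
```

Our failure 3 was exactly the `(True, False)` case: through the first gate, stopped at the second.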
Failure 4: Missing OAuth scope. Still didn’t work. The AI dug deeper and found that when Javier first authorized the app, it never requested permission to write to Google Sheets. The authorization token was missing the spreadsheet scope entirely. We created a pull request with the fix — a scope check that would prompt re-authorization if the necessary permissions were missing.
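The fix amounted to a guard like the following sketch. The scope URLs are real Google OAuth scopes; the function and the flow around it are assumptions, not the actual pull request:

```python
REQUIRED_SCOPES = {
    "https://www.googleapis.com/auth/spreadsheets",
}

def needs_reauthorization(granted_scopes: set[str]) -> bool:
    """True if the stored token is missing any scope the export needs.

    When this returns True, the app should discard the cached token
    and send the user back through the OAuth consent flow so the
    missing scopes can be requested this time.
    """
    return not REQUIRED_SCOPES.issubset(granted_scopes)
```

The key idea is that a token granted last month reflects what was asked for last month; adding a feature that needs a new scope means asking again.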
Failure 5: The Google Sheets API wasn’t enabled. Even after fixing the scope, it failed. This time, the AI suggested checking whether the Google Sheets API was actually turned on in the Google Cloud Console. It wasn’t. We enabled it, along with the Google Drive API, and tried one more time.
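This last case is detectable from the error response itself. Google's 403 for a disabled service typically carries a `SERVICE_DISABLED` reason in its error details; the exact dict shape below is an assumption based on that pattern, not a guaranteed contract:

```python
def is_api_disabled_error(error_body: dict) -> bool:
    """Check whether a Google API 403 means the API is simply not
    enabled in the Cloud project, as opposed to a real permissions
    problem. Assumes the error-details shape described above."""
    err = error_body.get("error", {})
    if err.get("status") != "PERMISSION_DENIED":
        return False
    return any(
        detail.get("reason") == "SERVICE_DISABLED"
        for detail in err.get("details", [])
    )
```

Distinguishing "you may not" from "this service is switched off" would have pointed us at the console immediately instead of at the token again.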
It worked. The Google Sheet exported perfectly, with each report on a separate tab. One hour from first error to working feature.
The Misconception That Almost Made Us Quit
After we got it working, Javier said something that stuck with me. He told me that when the first fix didn’t work, his instinct was to think the AI had failed — that if the AI didn’t know the answer on the first try, then the problem was unsolvable.
His words: “As somebody that doesn’t know that’s what you’re supposed to do, when I got to that first failure, I was like — well, if the AI doesn’t know it, how do we fix this?”
Then he paused and corrected himself. He’d realized something during the process: “It’s not that it doesn’t know it. It’s that it’s giving you the most common problems first. And then if that didn’t fix it, you’re going through the nichier, more unique problems until you eventually get through it.”
That realization is, I think, the single most important thing to understand about working with AI. And most people never get there because they stop after the first failure.
AI Troubleshoots Like a Decision Tree
Here’s the mental model that changed how I think about AI-assisted debugging.
When you give the AI an error message, it doesn’t have a crystal ball. It can’t see your entire system. What it does is make an educated guess about the most likely cause, based on patterns it has seen across millions of similar situations.
If that guess is right, great — you’re done. If it’s wrong, the new error message (or the absence of a change) gives the AI fresh information. Now it can eliminate one branch of the decision tree and move to the next most likely cause.
This is why the process isn’t linear. We didn’t work through a checklist of five items. Each failure changed the landscape. The error after adding test users was different from the error before — and that difference told the AI we’d moved past one problem and into a new one. The AI recognized the progression and adjusted.
It couldn’t have given us all five steps up front because the path branched at every point. If the first fix had worked, steps 2 through 5 wouldn’t have existed. If failure 3 had produced a different error, the fix might have been something else entirely. The decision tree was too wide to present all at once. So the AI did the only reasonable thing: it gave us the next step, waited for the result, and adapted.
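Read as code, the session looked less like a checklist and more like this sketch. The checks here are hypothetical stand-ins for the five real causes, ordered most likely first:

```python
def troubleshoot(checks):
    """Run ordered (name, check) pairs, most likely cause first.

    Each check returns True if that layer is healthy. The first check
    that fails is the current best hypothesis. Fix it, then run the
    whole list again, because fixing one layer can expose the next.
    """
    for name, is_healthy in checks:
        if not is_healthy():
            return name
    return None  # everything passes: the feature should work

# Hypothetical stand-ins for the five causes we actually hit:
checks = [
    ("credentials deployed",   lambda: True),
    ("account is a test user", lambda: True),
    ("cached token is valid",  lambda: True),
    ("token has Sheets scope", lambda: False),  # our failure 4
    ("Sheets API enabled",     lambda: True),
]
# troubleshoot(checks) -> "token has Sheets scope"
```

The loop only ever reports one hypothesis at a time, which is exactly how the AI behaved: one step, one result, then adapt.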
What to Do When the AI Gets Stuck
The AI doesn’t always get it right. Sometimes it starts suggesting the same things again, or goes down a path that clearly isn’t working. I’ve learned to recognize when this happens and intervene with a specific strategy.
When the AI spins its wheels, I ask it to make the problem more visible. Concretely, that means asking it to log the responses at each step of the process, to run experiments that would validate or disprove its current hypothesis, or to break an aggregated result into its components so we can see which specific part is wrong.
This is the same approach you’d use debugging by hand. If you can’t figure out why your total revenue number is off, you break it into its components and check each one. If you can’t figure out why an integration is failing, you log what happens at each step until you find the step where it breaks.
The AI is good at this once you point it in the right direction. It can modify the code to add logging, re-run the process, and then analyze the output. But it often won’t do this on its own — you have to ask for it. The skill isn’t knowing the answer. The skill is knowing how to narrow down the problem until the answer becomes obvious.
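A cheap way to add that observability to any pipeline is a wrapper that logs what goes into and comes out of every step, so the failing step names itself. A generic sketch, not code from the app:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("export")

def run_step(name, fn, *args, **kwargs):
    """Run one step of a pipeline and log its input and outcome.

    On success, the result is logged; on failure, the full traceback
    is logged before the exception propagates, so the log shows
    exactly which step broke and how.
    """
    log.info("starting: %s", name)
    try:
        result = fn(*args, **kwargs)
        log.info("ok: %s -> %r", name, result)
        return result
    except Exception:
        log.exception("failed: %s", name)
        raise
```

With every step instrumented like this, "the export doesn't work" becomes "step 3 returned an empty list", which is something the AI can actually reason about.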
I explained this to Javier using an example from his own accounting work. He’d had a situation where the total revenue didn’t match what he expected. He couldn’t figure out why just by looking at the aggregate number. But once he broke it down into the individual revenue streams and checked each one, the error jumped out at him. Debugging software with AI works exactly the same way. If the AI only has the aggregated result, it’s guessing. If you give it the breakdown, it can pinpoint the issue.
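The same idea as a function: compare per-component values instead of totals, and the discrepancy names itself. An illustrative sketch; the revenue-stream names are made up:

```python
def find_discrepancies(expected: dict, actual: dict, tolerance: float = 0.005):
    """Return the components whose values disagree, instead of just
    reporting that the totals don't match."""
    keys = set(expected) | set(actual)
    return sorted(
        k for k in keys
        if abs(expected.get(k, 0.0) - actual.get(k, 0.0)) > tolerance
    )

expected = {"dine-in": 42_000.00, "takeout": 13_500.00, "catering": 8_200.00}
actual   = {"dine-in": 42_000.00, "takeout": 13_500.00, "catering": 7_900.00}
# find_discrepancies(expected, actual) -> ["catering"]
```

Comparing only the totals tells you there is a $300 problem somewhere; comparing the breakdown tells you it's in catering.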
Why This Matters
Javier put it bluntly: “With my knowledge, it would have taken an unlimited amount of time to figure that out.”
He’s not exaggerating. The integration we fixed in one hour touched environment variables in AWS Amplify, OAuth consent screens in Google Cloud Console, local storage tokens in the browser, API scope configurations, and API enablement settings — all across different platforms with different interfaces. Without the AI guiding each step, a non-developer would have no idea where to even start.
But the AI didn’t magically solve it either. It got us there through a process of iterative narrowing — trying the most likely fix, observing the result, and using that result to determine the next step. The human’s job was to keep feeding it information, recognize when it was stuck, and help it see the problem more clearly.
That’s the real skill of AI-assisted development: not knowing all the answers, but knowing how to keep the conversation going until you find them. The people who give up after the first suggestion miss this entirely. The people who learn to push through — feeding back errors, asking for logs, breaking problems into components — end up solving things that would have been impossible for them alone.
The Takeaway
If I could distill everything from that one-hour debugging session into advice for someone using AI to build software, it would be three things.
First, don’t expect the AI to get it right on the first try. Especially with integrations, configurations, and anything that depends on external systems. The AI is making its best guess. Your job is to tell it what happened so it can make a better one.
Second, when the AI gets stuck, help it see. Ask for logs. Ask for experiments. Break aggregated data into components. The AI can’t troubleshoot what it can’t observe — and often it won’t think to add observability on its own.
Third, treat each failure as information, not defeat. Every error message is a signal. It tells you what did work (everything before the error) and what didn’t (the specific thing that failed). Five failures isn’t a sign that the approach is broken. It’s five pieces of evidence that narrow the problem until there’s only one possibility left.
The AI is a partner in this process. It’s not omniscient, and it’s not useless. It’s something in between — and learning to work in that space is what makes the difference.