Basic analytics skills in the dawn of agentic whatever
On my YouTube channel, "How I would start as a Data Consultant - if I could press Restart" is the most watched video, with over 7,000 views (which is a lot for my channel). I published it in October 2022 - three years ago. It does not mention AI at all, because AI did not matter then. But it does now. Time for an update, even if this one is a bit more meta than the first.
A lot of people are trying to predict how AI will change data work. Will analysts disappear? Will data engineers become obsolete?
I don't have those answers. Nobody does.
Quick break.
Before we start, I am running a free workshop in one week:
From Product Analytics to Growth Intelligence: The Metrics That Actually Explain Growth
We still have 15 seats open. Claim yours here. There will be a recording, so no need to make it to the live session.
But here's what I've learned after 12 years in this field: you don't need to predict the future to prepare for it. You just need to invest time in skills that compound regardless of which scenario plays out. This is what this post is about.
Let me be direct about something. The real threat isn't AI replacing data analysts or engineers. The real threat is simpler: you can't articulate your value beyond execution.
We've all feared that moment. You imagine walking into a meeting where leadership announces the data team is getting "rightsized" - five or six people cut because the company is "investing in automation." You might think to yourself, "Yeah, good luck with your data quality after that." But that won't save you. Management makes decisions based on incomplete information all the time.
The uncomfortable truth: if you can't explain why your work matters beyond building pipelines and writing queries, someone else decides your value for you (I know, the stupid v-word again).
So what actually makes you hard to replace, with all this AI whatever dangling over our futures?
I'm going to walk through two areas. Not predictions. Not guarantees. Just two investments where your time compounds no matter how the tools evolve. The first one has protected me through constant technology changes, and the second has already made my data life much more interesting.
Investment #1: Strategic Thinking & People Skills (The "Why" Skills)
People who can talk to people will always have more opportunities. Sorry.
If you're reading this, you probably got into data work because you like working with data more than people. You want to dig into datasets. Build pipelines. Solve analytical problems without sitting through meetings.
That's fine. That's why most of us got into this field.
But here's the problem: AI is getting good at the solo execution work. Writing queries, building pipelines, creating transformations. The work you can do alone is exactly the work that's easiest to automate.
What AI can't do - at least not yet and maybe never - is understand what people actually need. It can't navigate organizational politics. It can't take a vague request and trace it back through five layers of misinterpretation to figure out what problem someone is really trying to solve.

That requires talking to people.
I'm not saying you need to become a full-time stakeholder manager who spends all day in meetings.
You can still have periods where you focus on building. Where you work alone, dive deep into problems, do the technical work you enjoy.
But you need to be able to come out of that room when it matters.
You need to have the conversations that gather context. Ask the questions that uncover what people actually need. Understand the business problem behind the data request. Be curious to listen.
This is a capability, not a personality change. You don't need to love small talk or be naturally social. You just need to be willing to engage when the work requires it.
And the work requires it more now than it used to.
Let me show you what this looks like in practice.
Someone from marketing comes to you and says: "We need a campaign dashboard for this new initiative we're running. Just show us all the campaign performance metrics so the team can analyze what's working."
Path A - The Tactical Approach
You build what they asked for. Maybe you've done this before, so you pull up your blueprints from the last campaign dashboard. You implement it, ship it, move on to the next task.
This is fine. It's competent work. But it's also exactly the kind of work that AI is getting very good at.
Path B - The Strategic Approach
You ask: "Can you tell me more about this campaign? What's the goal?"
The marketing person gives you context: "We're trying to reach a new audience segment. We want to see if we can get this specific group of people interested in our product."
You keep going: "What would success look like for this campaign? When would you consider it very successful?"
Now you're in a dialogue. You're adding context to the problem. You're understanding it more deeply.

If this is a significant initiative, you go even broader: "Are there other people working on this campaign I should talk to? I want to make sure I understand what kinds of insights everyone needs."
Over two or three conversations, you collect significantly more context than what was in the original request.
Context is the key asset here - we'll come back to it later.
The Difference
In Path A, you're executing. In Path B, you're planning.
With all this context, you might design something quite different from a standard campaign dashboard. Maybe you discover they actually need to track audience segment behavior over time. Maybe you learn there's a specific conversion event they're optimizing for that wasn't mentioned in the initial request. Maybe you find out three other teams are working on related initiatives and you can build something that serves all of them.
You haven't just built a better dashboard. You've positioned yourself as someone who understands the business problem, not just the technical requirement.
And that's much harder to automate.
The "Why" Practice (Going Deeper)
This isn't just about talking to stakeholders. It's about questioning your own work.
Practice asking yourself: "Why am I building it this way?"
You're designing a data warehouse in a specific way. Maybe based on experience. Maybe copying patterns you've used before. Stop and challenge yourself: Why this approach? What assumptions am I making? What trade-offs am I accepting?
Take an hour to think through the decisions. Use an AI model as a thinking partner. Show it your design and ask it to challenge your choices.

Say you're setting up event tracking in a particular format. Ask yourself: Why am I structuring events this way? Is this genuinely the best approach for how this company works, or is this just the pattern I learned three years ago? What would change if I designed this from scratch knowing what I know about this organization?
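To make that concrete, here is a hypothetical sketch of the kind of decision worth interrogating - the table names and columns below are invented for illustration, not taken from any real setup. The same tracking requirement could be modeled in two quite different ways, and the point of the exercise is being able to argue why one fits your organization better:

```sql
-- Option A: one dedicated table per event type.
-- Explicit columns, easy to query, but every new event means a schema change.
create table analytics.signup_completed (
    user_id     bigint,
    occurred_at timestamp,
    plan        text,
    referrer    text
);

-- Option B: one generic event table with a flexible payload.
-- New events need no schema change, but attributes have to be dug out of JSON.
create table analytics.events (
    event_id    bigint,
    event_name  text,       -- e.g. 'signup_completed'
    user_id     bigint,
    occurred_at timestamp,
    properties  json        -- event-specific attributes (exact type varies by warehouse)
);
```

Neither option is "correct" in general. The question is which one you would defend for this organization, and why.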
Sometimes you'll come out reinforced - yes, it's a solid pattern for this context. But most of the time, you'll find areas where you're running on autopilot, applying patterns without understanding whether they fit.
This makes you stronger at designing systems. Your decisions become intentional rather than habitual. When someone asks you why you did it that way, you have an answer, a good and solid one.
Once you're in the habit of asking "why" about your own decisions, extend it to the work itself.
You're building a pipeline for some new tracking requirement. If you trace it back far enough, you realize you're at the end of a chain of interpretations. Someone told someone who told someone who told you this needs to be built.
By the time it reaches you, it's been through five or six layers.
Go back to the source. Talk to the person who originally needed this. Understand their actual motivation.
Most of the time, the work is justified. But what you usually discover is that the original need is slightly different from what got translated down the chain. When you understand the real motivation, you can find a better approach. Something that actually solves the problem they're facing rather than the request that emerged from the telephone game.
Sure, there's the classic "five whys" technique - but it's simple to describe and takes practice to get good at.
The Compound Effect
Here's what happens when you consistently practice this kind of strategic thinking and questioning: over time, you become a different kind of data professional.
You become the person who actually understands how the company works.
You understand the processes that drive the business - not the documented ones in some wiki, but how things actually happen day-to-day. You know which teams depend on which data, who makes what decisions, where the bottlenecks are.
You can map where data has the highest impact versus where it has low impact but high potential. You're not guessing about what to prioritize - you actually know because you've had the conversations and traced the work back to its source.
You understand the growth model. You know what levers actually move the business. When someone asks for a new metric or dashboard, you can quickly assess whether this is touching something that matters or if it's noise.
This is the compound effect of always asking "why" and always gathering context.
Your solutions start to look different. They're not generic blueprints copied from best practices blogs or your last company. They fit this specific organization - how it operates, what it cares about, where it's trying to go.
You're no longer just implementing requirements. You're designing for the business.
And this is exactly the kind of value that's very hard for AI to replicate - because it requires institutional knowledge, relationships, and judgment that accumulates over time through actual human interaction.
That's the first investment: strategic thinking and people skills. Understanding context. Asking why. Positioning yourself as someone who designs solutions, not just executes tasks.
But there's a second investment.
While you're building these capabilities, the execution of data work is changing fast. If you ignore that change, you'll spend all your time on tasks that could be automated. Which makes it harder to do the strategic work that actually protects you.
The second investment is learning to work with AI as a fundamentally new skill.
Investment #2: Agentic Work as a New Skill
I need to set expectations correctly.
This is not about becoming 10x faster. You're not going to read this and suddenly ship projects in half the time.
Working with AI to build data systems is a new skill. Not a productivity hack. A genuinely new way of working that you have to learn from scratch.
The hype content will tell you AI makes you ridiculously faster. Technically, once you're good at it, maybe there are efficiency gains. But that's not the point, and it's not what you should optimize for right now.

When you're learning this skill, you will be slower than doing it the old way. Significantly slower at first. You'll get frustrated. You'll be tempted to just write the SQL yourself because it would take five minutes instead of an hour of fighting with context and prompts.
That's normal. That's what learning a new skill feels like.
What changes isn't primarily speed. It's what you build. The shape of your work. The way you approach problems.
If you go into this expecting to immediately save time, you'll quit after two frustrating experiments and decide "AI isn't ready yet." If you approach it as learning a new way to work - with all the awkwardness and time investment that requires - you'll develop a valuable capability.
Here's how to build this skill properly.
The practice method: "Tie your hand behind your back"
Pick a small, local data project. Something real but low-stakes. Not production work - you need room to experiment and fail.
Now the critical rule: you're not allowed to write any Python code. You're not allowed to write any SQL.
Zero code from you.

Not to build the pipeline. Not to fix a bug. Not to "just quickly adjust this one thing."
You can look at the outputs. You can analyze whether the results make sense. You can review what was generated. But you cannot touch the code itself.
This feels ridiculous at first. You'll want to break this rule constantly. You'll see something that would take you 30 seconds to fix and spend an hour trying to get the AI to do it correctly.
Do it anyway.
What you're learning isn't how to build pipelines faster. You're learning how to provide context. How to break down work. How to communicate requirements in a way that produces results.
If you let yourself "just fix this one thing," you're not learning that skill. You're just using AI as a slightly frustrating autocomplete.
The constraint forces you to work entirely through context provision. That's the skill that matters.
The reality of starting out
When you first try this, it doesn't work very well.
The output is okay. Not terrible. But far from what you would have created if you'd just written the code yourself.
You're working with someone who has technical skills but is completely missing the big picture. They can write code, but they don't understand the connecting pieces. They don't know your conventions. They don't have context about why you're building this or how it fits into the larger system.
You have to figure out how to give them that context.
You'll sit there thinking "I could have built this in 20 minutes, and instead I've spent two hours writing prompts and I still don't have what I need."
That frustration is the learning process.
You're discovering what context is missing. What assumptions you normally make that aren't being communicated. How much of your work relies on tacit knowledge you've never had to articulate.
If your first few attempts work perfectly, you're probably working on tasks that are too simple. The struggle is where the learning happens.
What you have to learn (Breaking down tasks)
The biggest thing I had to learn - and what made the difference between frustrating failures and actually useful outputs - was breaking tasks down into much smaller chunks and revisiting my understanding of how tasks are structured.
When I started, I was far too general. I'd say something like "build me a data pipeline for this source" or "create a staging layer for these tables."
That doesn't work.
The AI has skills, but it needs bounded problems. Clear scope. Specific expectations.
Let me give you a concrete example of what a well-broken-down task looks like.

The Staging Table Example
Let's say you want to create a staging model. You already have a source table loaded from somewhere - an API, a database, whatever. Now you need to create the staging layer that brings this data into your data warehouse.
A staging table does very specific, limited things:
- Takes the source data and aligns it to your naming conventions
- Casts fields to the right data types (timestamps look like timestamps, strings are strings, etc.)
- Applies basic, non-business-logic transformations
- Prepares everything so downstream models have clean, consistent inputs
This is a straightforward, well-defined concept. It's bounded - you're not asking for the entire data model, just this one specific layer for one model.
When you break it down to this level, you can write much better context. You can say: "Here's the source table. Here are our naming conventions. Here's how we handle timestamps. Create a staging model that follows these patterns."
That's specific enough to work with.
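As a minimal sketch of what the result could look like, assuming a dbt-style project and a made-up raw_marketing.campaign_events source (the table, columns, and conventions here are invented for illustration):

```sql
-- Hypothetical staging model: stg_marketing__campaign_events.sql
with source as (

    select * from {{ source('raw_marketing', 'campaign_events') }}

),

renamed as (

    select
        -- align to (made-up) naming conventions: snake_case, *_id for keys, *_at for timestamps
        id                              as campaign_event_id,
        campaign                        as campaign_id,

        -- cast to explicit data types
        cast(event_time as timestamp)   as event_at,
        cast(spend as numeric(18, 2))   as spend_amount,

        -- basic, non-business-logic cleanup
        lower(trim(channel))            as channel,
        nullif(audience_segment, '')    as audience_segment

    from source

)

select * from renamed
```

The specifics don't matter; what matters is that the task is small and bounded enough that you can hand over the conventions, the source, and the expected shape of the output, and then verify the result.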
The Effect of Task Breakdown
Once I started breaking work into these smaller, bounded chunks, everything got better.
I could write clearer initial prompts and better context because I knew exactly what I was asking for.
I could develop better testing and validation approaches - because I could define clear success criteria for each small piece.
I could iterate faster - if something wasn't right, I knew exactly which bounded task to refine rather than trying to debug an entire system.
This is the skill. Not prompting. Not "talking to AI." Breaking down your work into well-defined, bounded tasks that can be tackled systematically.
What actually changes
I cannot tell you I'm now 6x or 7x faster at building data systems.
That's not how this works.
When I look at time spent on projects now versus before, there might be a small shift. Nothing dramatic. Nothing that would make you think "AI just 10x'd my productivity."
What changed is not primarily the speed. It's what I build.
When I build an analytics layer now, it looks different from what I would have created manually.
Documentation is often included by default. I go broader in some areas - exploring edge cases I would have skipped. The structure might be different because I'm thinking about the problem differently when I have to articulate it clearly enough for AI to execute.
It's hard to compare the "before" and "after" versions because the approach is different.
If I tried to time myself building a core layer manually three years ago versus using an agentic approach now, the comparison would be meaningless. Different scope. Different outputs. Different level of documentation. Different considerations.
The work I produce now has a different character. More comprehensive in some ways. More systematically documented. Sometimes more robust because I'm forced to think through edge cases when writing context rather than just handling them intuitively.
But also: requires more upfront thinking. More explicit planning. More articulation of things I used to just "know."
Is that better? Depends on the project. Depends on what you're optimizing for.
You're not learning this to save time on the exact same work you're doing now. You're learning it because it's a different way to work, and that difference will matter as the tools evolve.
But again, we don't know what this will look like in just 6-12 months. With these two investments, though, you can at least try to stay on top of things.
If you like, let me know how you're approaching this at the moment by replying to this email.