Revelry

AI-Driven Custom Software Development

Text reading "hiring an image in 2026? Preposterous! "

Should you run an Apprenticeship Program in 2026? If so…how?

At Revelry, we have a long history of running software apprenticeships. Our CTO is a former graduate (and so am I). But everywhere I look, hiring humans to engineer software at any level – especially at a junior level – is not in vogue. People are being laid off left and right. Every day another company sheds employees, and every day new AI tools are released which make AI easier to use (and therefore apparently make engineers easier to “replace”), including OpenClaw, Goose and, of course, Claude Code.

Let’s take this at face value for now (and not, as some say, as an excuse to shed post-COVID bloat or outsource). For my part, my job looks radically different than when I first signed on to Revelry as an apprentice in 2021. I use AI to help build every project I’m on, varying both how and how much I use it depending on the project. I ask fewer questions in our Slack, I browse Stack Overflow less, and I spend more time project managing and less time stuck on irritating bugs. I’m also just better at my job than I used to be – and somehow simultaneously worse, as the tools take more of the code out of my hands.

And so…is it ethical or responsible to bring a baby software engineer into this world? Is it worth it for us, and for them?

Anyone Can Cook

The Revelry apprenticeship program is very close to my heart. It’s the same program that allowed me – a furloughed green-card holder in a new city, where my previous career certifications and degrees weren’t recognized – to start a career. Over the last fourteen years, it’s given a huge number of people, from all sorts of backgrounds across New Orleans, the opportunity to get to grips with and master software development.

Greatness can come from anywhere, and the Revelry apprenticeship program has historically provided a great space for that to happen. Take a determined person with an understanding of what a web developer does – either a bootcamp graduate or self-taught – and you can mold them into a software engineer. Give them ample time to shoulder surf, the freedom to ask questions and experiment, and – most importantly – the trust to actually do the work, and you’ll see a fledgling developer emerge.

In my opinion, this is because it allows entry into what I’m going to call the “virtuous cycle of learning software”. In practical terms: you need to do stuff (call an API, write a class, fix a bug), so you learn stuff (API interfaces, best practices, N+1 footguns). This allows you to do more stuff, but then you have to learn more stuff… you get the picture. The more you learn, the more you do, and the more you do, the more you learn.

This balance – and tension – is what the advent of LLMs has thrown into complete disarray.

The 90/10 problem

It sounds so idyllic – the virtuous circle of learning and doing! It’s why the apprenticeship is successful, and how people can launch themselves into this industry – successfully – without a computer science degree. So why do I refer to it as a tension?

It’s because it’s also the reason technical interviews are and have always been worthless. It explains why imposter syndrome has always been rampant in our industry. It explains why the ‘skill’ of software engineering is so liable to rust and decay, and it explains why AI is really f**king our industry up. It’s this:

You don’t actually have to know anything to do this job ~90% of the time.

All you need is the ability to find out how to do things, then do them. Software engineers need to be experts at Just In Time learning – the ability to come across a problem, learn how to figure it out, and solve it. That’s what working and learning boil down to: they are fundamentally intertwined in our day-to-day.

It’s also why, most of the time, we eventually just forget we even solved the problem in the first place. Just like a Pokémon move: forgotten when the next problem arrives.

A scene from a battle game where only four moves can be learned, so one must be forgotten
An exaggeration; i actually know up to six programming concepts at any given time

But not quite. Something remains – ancestral memory, DNA memory, whatever you want to call it. We’ll probably have to look up how to solve the problem if we come across it again, but we’ll recognize the correct patterns, the class of problem, etc. Not everything is forgotten (unlike, say, that Claude chat you were having). So really I suppose it’s more accurate to say it’s JIT learning plus pattern recognition?

(If you feel like I’ve just denigrated you and all your colleagues – I do actually think JIT learning is a valuable skill, one that takes a lot of flexible, fluid intelligence to learn and perfect.)

But this skill – simplistically referred to as “the ability to Google” – is less valuable now. LLMs, like them or loathe them, are really good at enabling JIT learning. They’re so much easier than digging through random GitHub threads from five years ago, they won’t insult you or your intelligence, and they can parse the docs to find the one line you’re looking for. And if you want to complain that they’ll just hallucinate – how many times have you tried a fix suggested by a “jeff012” or a “muggles333” on a GitHub issue, only for it not to work?

And then, of course, not satisfied with optimizing this part of your job – the part where many of us actually get the satisfaction and joy – it can write your PR for you.

But there’s something missing here. I said you don’t have to know anything to do the job 90% of the time – but that last 10% corresponds to valuable, hard-won knowledge. It’s The Real Stuff™: systems design, contextual awareness of the product and whatever industry we’re building for, ideas around scale and connectivity. That’s not what 90% of your tickets are about – but that knowledge is what allows you to do thoughtful PR review and thoughtful ADR review; it’s what allows you to evaluate your own output, the output of other engineers, and the output of the robot. It’s the intuition you absorb by osmosis through learning, doing and forgetting.

That’s the part you get from an apprenticeship program, or 5 years experience, or 10 years experience. When presented with a problem – either architectural or microscopic – it feels like all the times you solved this problem in the past guide you towards the correct solution. You don’t actively remember; sometimes it even feels like solving the problem for the first time.

And that’s the thing an LLM can’t ever do (or maybe it will – denying future progress makes me sound like a FUD-spreading dinosaur destined to die off, while breathless anticipation makes me sound like a bootlicking LinkedIn business bro).

But I think that, for now and presumably for a while, it is this last 10% that separates the pros from the noobs. Anyone can cook: but you can’t instantly become a chef. That takes time.

Therefore I do think, regardless of where we end up, that businesses should continue to run apprenticeship programs.

So why run an apprenticeship in 2026?

I know that’s not enough for some readers, so bluntly: eventually, I will die. All my managers will die. All my colleagues will die – and yes, even one day you too will die, dear reader.

So who’s going to keep the software working?

If you answered robots – yes, generally correct. But (and here’s where I will risk a future prediction) there’s currently no way around the alignment problem. Maybe the AI will delete all your tests because it can’t be bothered to fix them. Maybe it will kill off all your users because that means no more Sentry errors. What fun!

So we need something (preferably someone) in charge who is aligned with the goal of keeping the software working *and* not killing anyone. One human in the loop – or two, or three – can stop those errors (and massacres) before they happen. But we’re all dead, remember. And we stopped training people to understand what was going on.

One day, the apprentice you didn’t hire won’t be the senior who notices that the LLM is on the warpath again and just shared the production API keys over Moltbook.

Or hire the apprentice, train the senior and sleep peacefully.

So we need fresh blood (even IBM knows that) – and more than ever, there is a need to offer future engineers hands-on training on real engineering projects, especially as the role shifts toward infrastructure, systems management, and feature planning. That’s the sort of thing that can’t be taught in a textbook.

On a non-technical note: offering an on-ramp into software engineering to passionate people is an extremely worthy goal on its own. It’s a way of giving back to the community of New Orleans, of providing an opportunity to those who might not otherwise have one, and of bringing genuinely different experience into the room. The engineers this program produces – sometimes on their second or third careers – already have a rich and diverse range of knowledge by the time they graduate the 12-week program. I think that a breadth of experience helps improve your bottom line and company culture, and I’m not the only one.

But whether you think it’s a worthy goal in and of itself – or not – you’re faced with the challenge of how to actually do it.

How to actually do it

Selecting candidates

This is your first tricky problem. How the hell do you evaluate your candidates? And do you care about cheating?

First (and a huge benefit for hiring apprentices over fully fledged engineers): Focus on fit and judgment, not output.

LLMs make the question “can you produce working code?” close to meaningless – and the output almost always looks fluent, even when it’s not correct.

Instead, for apprentice interviews, look for:

  • Can the candidate explain why they’re making a choice, but also be honest if they don’t know? Sure, you used a reduce rather than a map – but why?
  • Can the candidate notice when an answer looks right but feels wrong?
  • Can the candidate ask good questions, clarify requirements, and be honest about what they don’t know?
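To make the first bullet’s map-versus-reduce question concrete, here’s a minimal Elixir sketch (illustrative only – the names and values are mine, not from any actual interview):

```elixir
list = [1, 2, 3]

# Enum.map/2 transforms each element one-for-one: the output has the
# same shape as the input.
doubled = Enum.map(list, fn x -> x * 2 end)
# doubled is [2, 4, 6]

# Enum.reduce/3 folds the whole list down into one accumulated value.
total = Enum.reduce(list, 0, fn x, acc -> acc + x end)
# total is 6

# You *can* emulate map with reduce – exactly the kind of choice a
# candidate should be able to justify out loud.
also_doubled =
  list
  |> Enum.reduce([], fn x, acc -> [x * 2 | acc] end)
  |> Enum.reverse()
# also_doubled is [2, 4, 6]
```

A candidate who reaches for reduce where map would do isn’t wrong – but they should be able to tell you why.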

Practically, that means interviews that look less like puzzles and more like actual engineering scenarios: a case discussion about a realistic feature, or a short debugging exercise in an unfamiliar codebase where they narrate their thinking. If AI is allowed (it should be), require them to explain what they asked, what they accepted, what they rejected, and what they verified.

This is excellent news, because this is how Revelry already runs its hiring program for apprentices!

Ok, now you’ve interviewed them…

Teaching apprentices

Now we get to the heart of the matter. I mentioned earlier that 10% of the job is important, real knowledge – about best practices, code smells, infrastructure planning – and that this has (historically) been learned via osmosis during the learning/applying/forgetting cycle. But if this isn’t what the job looks like nowadays, and the cycle is broken, how are they going to gain this intuition and learn these skills? Should we tell them not to use AI – and if we tell them to use it, how?

Let’s get it out of the way: I really don’t think the thing to do here is ban AI and force them to “learn the old-fashioned way”. I’ve heard the arguments, which usually run along the lines of “we should force them to write code by hand like we did, and then they will become great engineers.” I think the problem here is twofold: firstly, they probably won’t listen to you, and secondly, in the grand scheme of things, three months isn’t that long in the context of their future career. AI is quickly becoming the way people learn new skills and ideas. It might not be the way that *you* do it, but it’s becoming more mainstream every day. You might not like that, but that’s beside the point. If I’m working a problem and I need to know the order of arguments in a reduce/3, you best believe I’m asking the magic genie and not hitting the books. And if that’s what *I’m* doing, asking the apprentice who is learning at my elbow not to do the same is farcical and hypocritical.

So how do I think they should use it…while learning at the same time?

It’s time for AI apologetics, and a bit of philosophy.

A human during an apprenticeship office apologizing to another human. The apologizer has a bashful robot behind him
I apologize for this ai image of an intern apologizing (but ms paint can only get me so far)

A priori and a posteriori

A priori knowledge is knowledge you already possess – ‘before’ knowledge. ‘A posteriori’ is the opposite – ‘after’ knowledge, knowledge that must be acquired through experience or research. I think these two principles can be used to split tasks between “feel free to use AI” and “please don’t use AI”.

If you can solve the problem using a priori knowledge (i.e., knowledge you already possess), then don’t use AI. Elixir koans are a good example: the exercise contains all the knowledge you need to work through it; it’s a basic primer on how the language is structured and functions. If you need a posteriori knowledge (i.e., research) to solve the problem – say, trawling the Elixir docs to figure out how a Plug works – then please, use AI.
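For illustration, here’s what a koan-style, a priori exercise looks like in Elixir – everything needed to fill in the blanks is on the page (a made-up example, not an actual koan from any curriculum):

```elixir
# In Elixir, <> concatenates strings, and = is a pattern match that
# raises MatchError if the two sides don't match – so the match itself
# is the assertion.
greeting = "hello" <> " " <> "world"

# Fill in the expected value to make the match succeed:
"hello world" = greeting

# The same idea with tuples – a shape the standard library uses everywhere:
{:ok, value} = {:ok, 42}
42 = value
```

No docs, no AI, no prior experience required: the snippet teaches the very rule it tests.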

And if you want to use it to write the code for you? Do it – but make sure to apologize.

Make them AI apologists

When I say apologist, I mean in the philosophical (or theological?) sense. Sure: they can use AI to generate code (just like a regular software engineer in 2026). The difference is the training wheels – before merging, they have to explain it. They have to offer a reasoned, systematic defense of their code.

If they don’t know what the code is doing when they first generate it, that’s fine: they can look it up. But before merging, there should be a moment in time where they explain their code; either to a fellow engineer, or even to an LLM. José Valim talked about this on the Thinking Elixir podcast and built PR Quiz, a tool that takes a pull request diff and asks “which approach is better, and why?”

My hunch is that this is the first step in healing the virtuous circle – reconnecting doing with learning.

The 12-week program

We’ve previously structured our program in two phases. First, there’s a short ‘learning’ phase where they are focused on shoulder surfing, Elixir koans, Exercism and working their way through our other learning resources. This is followed by a longer ‘doing’ phase, where they still shoulder surf, but are resourced to projects and work through issues/bugs. Over the weeks (hopefully) their confidence and independence increase, until by the end of the program, they are ready to graduate.

I don’t think this needs to change that much. We just need to tweak what we do in each block, and add one crucial, key distinction: when to use AI, and when not to.

With that out of the way, here’s what I think the revised timeline should look like:

  • Phase one (2 weeks):
    • Onboarding and familiarity with how the company works – how projects are structured, what the development cycle looks like, etc.
    • How an ideal software application is structured and learning how to interact with software as a developer. This means making sure they are familiar with version control, testing, release processes.
    • Practice using AI – Claude Code, Cursor, whatever – and also not using AI, if the situation doesn’t demand it.
    • It’s still valuable to work through programming concepts and maybe even some language specifics, but we don’t need to spend as much time here as we used to.
    • Just like before, there should be an emphasis on shoulder surfing and watching other engineers solve problems. AI actually provides an added bonus here: apprentices should be encouraged to have an LLM up to ask questions to about what they are seeing. This should help them maintain context for what the engineer on the other side of the screen is doing.
    • Encourage experimentation with AI in the appropriate places: exploring codebases, scaffolding projects and as a partner to create issues.
  • Phase two (9–10 weeks): where the magic happens:
    • Continue to keep the focus on working with other engineers, both surfing and also pairing.
    • Encourage building familiarity with the terminal. No matter how good the AI gets, allowing an LLM unfettered access through SSH to a production server gives me the willies.
    • Work on their own tickets and issues; gain independence; generally just keep learning and doing!
    • Practice apologetics. Work the tickets, commit the work and open a PR: then set up time with an engineer to talk through the code and the decisions made. These meetings shouldn’t feel adversarial or hostile. This does of course affect velocity; but it also means that the apprentice is getting the best of both worlds – the real-world practice of using these tools, and a grasp of what the underlying code produced is actually doing.

This is moderately similar to what it used to look like, but what has changed is the emphasis: both phases spend less time on rote syntax and more time on how we work – version control, debugging, incremental change, and how to not break production.

The organization’s job

Production is down, don’t blame the intern.

It’s pretty simple: let them shoulder surf, and encourage them to ask why the person they are shadowing makes a certain engineering choice. Make sure to encourage best practices (hopefully by example). Learning the industry in 2026 can feel daunting, but it should also feel quite exciting. This technology is still nascent, and safeguards across all levels are holier than Swiss cheese: like using a $20/month Figma subscription to get $70,000 in tokens. So really, all we need to do is train them in good software engineering practices and get out of their way!


Conclusion

I want software engineering apprenticeships to continue. I continue to hope that they are viable in 2026, and I know I’m not alone. But I do know that as a working engineer right now, I don’t have a clean answer to the tension between learning and working – and neither does anyone else.

What I know is this: the job isn’t going away, it’s changing – faster than anyone can keep up. The guy who coined the term ‘Vibe Coding’ in the first place feels overwhelmed – so don’t feel bad. But despite the assault of new tools and technologies, we do still need people who understand systems, code, risk, context, and consequences. Run an apprenticeship in 2026 – no one’s going to regret inviting more humans to the party when the robots start acting out of line.