Revelry

AI-Driven Custom Software Development

I have no trick but I must treat: 7 Reveler Perspectives on AI

It’s spooky-monster season, and everyone’s favorite sci-fi horror monster, AI, is colliding with real life like never before. I sat down with seven Revelers and asked them some pointed questions about AI, both real and fictional. Enjoy 🎃!

What will you do jobwise when AI makes your job obsolete?

BEN: I was seriously thinking about manual work, like plumbing and stuff like that. Stuff that AI can’t actually handle, like an electrician, that kind of job.

DAVID: That’s a good question. I don’t believe AI will make my job obsolete. I foresee that it may change my job, but with the level of AI that I have seen so far, I don’t really see my job becoming obsolete in the near enough future that I have to worry about it. I think it will just shift to focusing on more business-related problems, or more complex problems that are not as easily handled by AI.

JASON: That’s not going to happen, but … I would like to work in a climbing gym or bike shop.

GLEN: Jobwise, I have no idea. I enjoy playing with computers. I will probably still do that on my own. If no one will pay me to do it, then no one will pay me to do it. But I’m not overly worried that AI is going to show itself to be valuable enough to do that. Now, whether hype will cause that to start happening all over the place is a different story. So, yeah. I don’t know. There’s not a whole lot I can do about it, and I feel like if something like that does happen, things are going to be pretty different all over the place, so I don’t feel like it’s necessarily something to try to start planning for. I’m probably just screwed and will be running around trying to find some way to make some money, but y’know.

JESS: I will become a real estate agent or a massage therapist or a tradesperson.

PJ: I can’t tell if that’s a joke or if that’s a real question. A little bit of both. Assuming that AI doesn’t also make audio engineering obsolete, I probably would go back to that. Live sound.

REBECCA: I don’t think AI will ever make my job obsolete.

Favorite fictional AI?

BEN: Ooh I love Isaac Asimov. “I, Robot” is one of my favorite books. That kind of stuff I like.

DAVID: I can’t say that I know very many, but the first one that comes to mind is actually the integrated ships’ computer “AI” in Star Trek. It is AI but almost acts as if it’s not, and Star Trek is just one of my favorite shows, so that’s the one that immediately comes to mind.

JASON: Wall-E.

GLEN: I do like HAL. HAL was honest. HAL was to the point. HAL’s a pioneer. HAL was the first. I like an AI with character, so the Agents in the Matrix were interesting. HAL’s a good one. Something where you know where you stand with it, right? That’s important.

JESS:  Pat, the virtual home assistant in Smart House (1999).

PJ:  The lady from Smart House. Remember that Disney Channel Original Movie? Of course she goes rogue, like they all do. I don’t remember her name, but, yeah. I’ll go with that.

REBECCA:  I’m not really a sci-fi person, but the first one that comes to mind is the old Disney movie Smart House. I was terrified of that as a kid.

Who would win: HAL (2001) or AUTO (WALL-E)?

BEN: HAL.

DAVID: I think HAL would win.

JASON: Oh. I think HAL.

GLEN: Oh HAL.

JESS: AUTO.

PJ: Both of those movies are very fuzzy in my brain. I’ll just go with AUTO? I do like WALL-E so I’ll go with that.

REBECCA: I’ve never seen WALL-E, so I’ll give HAL the win by default.

How far away from AGI do you think we are?

BEN: Part of me would say we’re a couple of years away. For how fast it’s going, I feel it might be 5 years away. But another part of me says we’re going to hit a wall at some point and we might not make it in another 10 – 15 years, or even longer – so I’m kind of on the fence about that. But if I were to guess, I would go with my gut on the latter. So I would say we hit a wall at some point and don’t reach general AI, but get as close to it as possible in 15 – 20 years.

DAVID:  I think much farther than people expect depending on your definition of AGI. The current AI paradigm that we are in sounds very human in a lot of scenarios, but it still lacks correctness. Verifiable correctness is a huge requirement for AGI that I don’t think we are close to, so I think at least 10 – 20 years out for true AGI.

JASON: I think it depends, because there are a lot of incentives to move the goalposts, but in opposite directions. So I think we will agree that the term AGI is meaningless right about the time it happens, as we sort of understood it five years ago. Maybe… ten years?

GLEN:  Infinitely. The amount of resources that it would take is something we do not have. The metrics that we have for measuring the progress that has been made for LLMs are weighted at best. There’s a lot of hype driving it. It doesn’t do much, even now with the insane resources that are being thrown into it. So I think maybe if something is achievable it’s not going to happen with what we’re doing now. The cost is so insanely high that it’s not going to make that jump before something catastrophic happens and there are no more resources, whatever those resources may be that are needed to get to that point.

JESS:  Thankfully, pretty far.

PJ:  That’s a can of worms. I feel like AGI is kind of something we made up and it’s sort of a moving target, and isn’t exactly clearly defined, so to say that we’ve “reached AGI” feels like a sort of fuzzy assertion to make, but from what I think that we intend for that term to mean, which is, “the AI is as smart and capable as a human and can do all the things that we can do without our help”, I don’t know. A hundred years?

REBECCA:  I don’t know enough to know, but I think we’re further than a lot of people say we are.

Favorite LLM as of now?

BEN: Claude

DAVID: My favorite LLM that I have used is probably going to be one of the Qwen models. I did some local LLM development with the Qwen models, I think it was the 14B, and its performance was quite impressive, even on modest hardware for a local LLM setup.

JASON: I like the Claude family. Opus, Sonnet, and Haiku.

GLEN: I still mostly avoid them. I don’t do much with them. I don’t know. There’s honestly not a whole lot – I’m a cynic. I distrust everything. I’m not a big fan of corporations. I don’t trust them. So as far as Large Language Models go, I don’t know that any of them are necessarily trustworthy. Each one is controlled by a single entity. It’s all behind a wall. You can’t see what’s going on with any of it. Other than that, they all seem to work roughly the same. It really just depends on what has the best information for what you happen to be asking in its training data.

JESS: Claude.

PJ: For coding stuff I use Cursor. I haven’t really played around with other LLMs just ‘cause I’m using Cursor and it does ok with that. I have ChatGPT on my phone that I don’t use that often, but for just everyday stuff like, “what temperature do I cook my chicken breast ‘til?”. I could look it up on Google, but sometimes it’s easier to just chat with it. So I would say, for everyday personal stuff, ChatGPT and for coding work stuff, Cursor.

REBECCA: I find myself using the AI tools and features that naturally fit into my workflow rather than having allegiance to one particular LLM.

What is that LLM going as for Halloween?

BEN (Claude): A Steve Jobs-style uniform.

DAVID (Qwen): Probably as a nerd or a programmer, because they have a bunch of coder-specific models they’re working on, so I think that makes sense.

JASON (Claude family): Wall-E.

GLEN (No favorite): A lie.

JESS (Claude): A ghost?

PJ (Cursor, ChatGPT): Each other.

REBECCA (N/A): Something with way too many fingers.

Where do you find yourself using AI in your day to day?

BEN: Debugging. I feel like it’s very helpful having that extra set of eyes. Sometimes I miss something in the code that I can’t see – I actually can’t see the difference in the words, like if I missed a capital letter or misspelled something. It’s not that I have dyslexia, but my brain will switch stuff up and I’ll miss it, like those puzzles where the thing is right in your face and you can’t see it until someone points it out. And also opinions: I’ll sometimes just ask for opinions and see what it says. I’ve found myself recently, out of curiosity, giving it a block of code and asking the AI to review it for feedback (sometimes useful, other times I don’t agree with it exactly).

DAVID: Actually, I find myself using it the most with the Google integrated AI since it automatically runs when I run a Google query and looks up links and stuff. That, actually, I think I’ve gotten the most use out of.

JASON: Most recently, I’ve been writing blog posts so it’s been really helpful in catching grammar mistakes and suggesting places that are unclear. I don’t do as much with coding because I don’t have a large volume of code to write often. It’s more often a few very choice edits here and there.

GLEN: Well, I have been using it some, for work, as far as tasks for things that we’re building out. On one project we were using it to try to help users generate content and substance for disability reports. So everywhere from data that needs to be correct, which, I don’t know, we put caveats around it, to things trying to fill out boilerplate. Get through “the stuff” that for whatever reason needs to be there but no one wants to write or generate.

JESS: Mostly just, I’m pasting a lot of emails into it and asking, “does this sound right?”

PJ: I do have a little script I wrote to write my standup every morning. It hits the Revtime API to get my time logs from the day before and then hits the RevAI API to formulate those into a standup. At first, when I got Cursor, I would try to give it more complex tasks, like, “Write this LiveView for me” or “I’ve written this LiveView, write all these tests for me,” trying to give it as much context as possible. But I’ve found that it ends up taking me more time to go back and forth with it to get it to do it the right way, or the way that I want it to do it, or the way that’s going to work or be readable. So instead I give it smaller tasks – but to be totally honest, I don’t use it that much. I think at this point what I use it for a lot is explaining things to me, not actually writing code, because I don’t typically like the code that it writes. So sometimes it’ll just be a question that doesn’t necessarily even have code in my repo to reference, just like, “How does such-and-such Postgres unique index work when you do this, that, or the other?” Maybe I’ll say, “Oh, here’s my migration, explain how this works.” Another thing I gave it: there was this giant doc block on a function that hadn’t been updated in a while, and in the function was a list of permissions, and I said, “Update this doc block and document a description of what each permission does.” So, sort of an easy task with not really much room to screw it up, ’cause it’s just a doc.
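PJ’s standup script boils down to two API calls plus a prompt. As a rough sketch only – Revtime and RevAI are internal Revelry tools, so the URLs, payload shapes, auth scheme, and field names below are all invented stand-ins – it might look something like this:

```python
import json
import urllib.request

# Hypothetical endpoints; the real Revtime/RevAI APIs are internal
# and their actual shapes are not known here.
REVTIME_URL = "https://revtime.example.com/api/time_logs"
REVAI_URL = "https://revai.example.com/api/generate"

def build_prompt(logs):
    """Turn yesterday's time-log entries into a prompt for the LLM."""
    entries = "\n".join(f"- {log['project']}: {log['note']}" for log in logs)
    return (
        "Summarize these time logs into a short daily standup "
        "(yesterday / today / blockers):\n" + entries
    )

def fetch_json(url, token, payload=None):
    """GET (or POST, if a payload is given) a JSON endpoint with a bearer token."""
    data = json.dumps(payload).encode() if payload is not None else None
    req = urllib.request.Request(
        url,
        data=data,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def generate_standup(token):
    # Step 1: pull yesterday's time logs, step 2: have the LLM write the standup.
    logs = fetch_json(f"{REVTIME_URL}?day=yesterday", token)
    reply = fetch_json(REVAI_URL, token, payload={"prompt": build_prompt(logs)})
    return reply["text"]
```

The nice property of this shape is that the prompt-building is a pure function, so the part most likely to need tweaking can be tested without hitting either service.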

REBECCA: I would say mostly for research or analyzing customer feedback. I don’t really use it for writing. I find that synthesizing my thoughts into writing helps me communicate more effectively and ensures that I truly understand what I’m saying, so I prefer not to rely on an LLM for that.

Who would you rather Trick or Treat with, Alexa or Siri?

BEN: Siri. I would say neither, but if I had to choose, I would say Siri. I don’t like Alexa – I don’t use any Alexa products – so definitely not Alexa. Siri… I’ve used it, so that’s the closest I’d get to trick-or-treating with someone.

DAVID: Last time I used an iPhone with Siri was over a decade ago, and they just redid Alexa to be nicer, so I’m going to go with Alexa.

JASON: Alexa.

GLEN: Probably Siri, as I’m marginally more confident that it wouldn’t smuggle a bunch of information about my route, the candy I got, the candy I traded, etc. back to an entity waiting to charge me more for toothpaste and certain candies.

JESS: Neither.

PJ: Ooh. I think it might be fun to trick or treat with both of them, because they can then just sort of talk to each other back and forth and I can just watch and be amused.

REBECCA: Neither

Do you think we are in an AI bubble or more of an AI Mylar balloon situation?

BEN: I would say it’s a bubble – but at the same time it’s a game changer, so it’s not a bubble that’s going to burst. It’s more of a way we have to adapt in the future, like when autocomplete became a thing after we started programming. Before, you would program in Notepad without autocomplete, and the compiler would only show the errors when you compiled. Then autocomplete came along and you would get warnings as you wrote code. So I feel like it’s like that.

DAVID: I think we are in an AI bubble similar to the crypto bubble and the dotcom bubble, in that AI has not yet found its niche, if you will. Everyone is trying to use AI for everything, and AI is legitimately good at many things, but people are trying to make it good at everything, and that’s creating a bit of an AI bubble, in my opinion.

JASON: There’s definitely a lot of hot air, no matter what it is. Yeah you could call it whatever you want. Might be a zeppelin.

GLEN: I think it’s more like a warhead that’s going to go off. If you look at it, what, 30% of the S&P’s value is in companies that are somehow pretty directly related to or involved in LLM-style AI work: Nvidia, Microsoft, Google, OpenAI, Meta to a degree. That’s a lot of the entire stock market, and their revenue pales in comparison to what they’re spending.

JESS: I think “Mylar balloon” situation is a good way to put it.

PJ: I do think that we’re in a bubble. I have no idea how long the bubble will continue to grow and be there. I do think at some point it’s going to pop, and I think it’s going to be similar to other bubbles we’ve seen, where a select few players remain in the game and continue to be used. I do think that AI is a genuinely useful tool and has legitimate use cases, and there is enough demand for it to be used, but I do think that it is a bubble right now. When it does pop, the majority of the players will either go out of business or be consumed by a few bigger players who will remain around for a while.

REBECCA: I think it’s more of a balloon situation. I do see value in AI, but there’s a lot of extra air in there right now that’s going to deflate.

What do you call your AI in your head?

BEN: I call it “Chad”.

DAVID: I don’t think I personify it nearly as much as other people do, so it doesn’t really have a name in my head. The closest name would be the model name. I’ve never really assigned a name to it. Which is odd, because I do anthropomorphize many other things, but for some reason not LLMs.

JASON: Robot.

GLEN: Dumbass. When I’m talking to myself it’s usually because I’ve done something dumb.

JESS: Buddy.

PJ: Bro.

REBECCA: Me and AI aren’t that close. We don’t have that kind of relationship.