Gen Z ChatGPT Arcade Troubleshooting

So I'm not saying I use GPTs or LLMs for arcade purposes, but my company has access to (which means I do) a private pro workspace containing o1 and o3-mini, models more skilled at reasoning, plus o3-mini-high to help engineers (like myself) break down highly complex full-stack coding for our web apps. We also have access to GPT-4.5, which showcases the research preview OpenAI is putting out. So I put it to the test with sample questions a hobbyist might come asking it. The big deal about 4.5 is its extensive scraping of human-based research across as many sources as it can find on the open web, many of which you can actually see come from here. If they stay on this path with GPT-4.5 and beyond (maybe not so much the o1 LLMs), we may actually see a highly viable option for obtaining very useful information for arcade-related tasks. Here's me asking it how to compensate for wood aging and for T-molding that no longer fits in the enlarged groove. It's interesting to see the evolution. This was one of many cited quick and more in-depth fixes it offered me, based on the tools I had, what type of wood the cabinet was made of, what type of fix I was looking for, etc., all of which it prompted me for first. 🤷‍♂️


The thing about all of what you just posted is you basically used to be able to just google that topic, and the first page of hits would have been links to this and other arcade forums, with very relevant info. (Instead of a page of sponsored ads for shit to buy.)

Or you could have just searched here and found the same posts.

This is what's irritating about these AI companies and their LLMs. In many cases they're actually not much better than a Google search used to be, 10-15 years ago. They just seem more 'friendly' to use. Because people would rather 'ask their friends' than actually research things.

The fact is, the knowledge that informed the answers ChatGPT gave you came from people. People here and elsewhere. Society didn't mind Google organizing the web's knowledge, and providing links to it, because those links would drive traffic to those sites. That traffic benefited those sites, site owners, communities, etc.

Things like ChatGPT are just hoovering up that info and using it for their OWN profit, without providing any of those additional benefits to the people who created and curated that knowledge originally. And that is a net negative for society.

It might seem to YOU as an individual like it's easier to use. Maybe because it saves you a few clicks and some reading, and you get an answer spoon fed to you. But it's also the answer the *LLM* decides to give you, instead of YOU using YOUR brain to evaluate a bunch of different answers, and pick the best one. It removes your thinking and choice from the process. It turns you into a passive receiver of information, not an active participant in the process.

It actually obfuscates knowledge from you, because it doesn't give you the ability to explore and think for yourself. It's only concerned with giving you AN answer that it thinks you'll be happy with. It's like how the Spotify Top 100 becomes what people listen to, because people check out the Spotify Top 100 to figure out what to listen to. You get a self-reinforcing system that narrows choices and maximizes benefit for the company that owns the system. Not you.

These differences have additional consequences, which most people don't realize right now, but are affecting everyone in the long run. Often in ways you don't realize are bad until it's too late.
 
I can get down with the passive convenience being a serious concern.

Interestingly enough, it makes me think of WALL-E, if anyone's familiar with it: excess and laziness literally destroyed the world.

I still think 4.5 is a step in the right direction. Its deep research protocol, for once, cites its sources in its feedback to you, still highlighting the human element behind its aggregation.

Where I have a unique perspective is my experience as both a multi-disciplined intelligence collector and analyst at different times. I see no negative effect when these tools are used to perform research into topics that an advanced search engine like a forum's can't handle. I know this because I've helped design plenty. True generative AI is no more a search engine or a lazy copout than searching for and reading thread after thread from all the people who had to trudge through the mud together before they found a solution. Cool, so you're still operating off the back of someone else's work, just less "elegantly," I say sarcastically. It's a tool like any other, and one viable in its application as it moves forward for hobbyists in any category.

The ability to use an assistant that cites the human effort spent over years gaining and reporting knowledge on a particular boardset, for example, is no different from me doing that same research on those same sources. Now, though, there is no guesswork in my understanding, because with the right LLM design the corpus is well contained and contributes to the overall knowledge base of the AI as it learns. What you train will get better for you. One of our software platforms uses this best practice. You don't get all the answers overnight. You train it. You tune it. It becomes another FTE on your team.

And the right deep research model challenging you (like 4.5 is beginning to do) with additional prompting about WHY you might THINK you should be looking for something when, in fact, you're on the wrong path, is hard for a human to emulate unless they're right next to you performing the same research and are more knowledgeable than you.

You mentioned Google being nothing but ads now, when it really did used to be an amazing search tool. That's a travesty, but the right GPT performing deep research can uncover the nuggets of information and deliver them concisely enough to advance individual knowledge.

Where GPTs shine in societal advancement is their use in defense. I mentioned I work in cybersecurity, specifically in cyber threat intelligence analysis, cyber threat hunting, and security operations. What I've seen generative models and threat emulation do for net defenders is downright ridiculous in the efficacy of intelligence collection, processing, analysis, dissemination, and consumption across tactical, operational, and strategic levels and decision-making policy. Things that were not possible 10 or even FIVE years ago. Everyone from large corporations to mid-market businesses to FCEB, DoD, and IC organizations is leveraging these toolsets to combat extremely nasty stuff from highly skilled, evolving adversaries using AI-based cred stealers, remote access trojans, espionage, and SO much more. There is a place for these toolsets in defending the very fabric of society, and I personally witness it with clients every day.

Coming from military intelligence: without these tools, our analysis efforts, even at the unit tactical level, could never have saved as many lives as they did on my last deployment, for example. As with all things, too much of it is bad. But use it just right and you have a world-changing phenomenon.
 
Ya know this reminds me of how the real way to get an answer on Google these days is just to ask your question and then add "Reddit" at the end.

And hey, result #3 gives about the same info.

[screenshot of the Google search results]

Which I guess is why Google is now training its AI search on Reddit data.
 
Lol too true, "focus on Reddit please, Google, thanks"
 
LLMs do a lot more than just find stuff; it's a bit apples-to-oranges to draw parallels between LLMs and a search engine

and 20 years ago (or whatever arbitrary span of time we wanna cite) many communities were saying the same thing about the internet

it has been observed that since industrial times (and arguably prior), Western societies, above all other values, esteem "efficiency" as the highest of virtues. efficiency as a prime directive has completely changed the way humans produce, operate, organize, and deliver goods and services, and has fundamentally changed our relationships with technology and each other (i just saved you 500+ pages of reading https://www.amazon.com/Technological-Society-Jacques-Ellul/dp/0394703901). AI was an eventuality and a necessary next step in the quest for greater "efficiency"

until everything collapses, AI is not just here to stay, it is already and will be a part of each of our lives. talk all you want about right/wrong/good/bad/healthy/toxic, it doesn't matter 🫤
 
Ya know this reminds me of how the real way to get an answer on google these days is just to ask your question then add "Reddit" at the end.


It's the ultimate embodiment of the statement (I think from Cory Doctorow) that the internet is "Five giant websites, filled with screenshots of the other four".
 
But, anyway, we can all agree or disagree that AI is "bussin" and/or "mid" and/or has "rizz". Let's focus on the important stuff here, guys.
 
until this thread has a mention of skibidi toilet, i'm unconvinced it's really living up to its full zoomer potential



edit: speaking of search, if KLOV search is correct, my post is literally the first mention of skibidi toilet on KLOV. see, i know the kid lingo
 
LLMs do a lot more than just find stuff

I actually mostly don't agree.

I think they mostly just find stuff. They just obfuscate the process and sources from you. Instead of presenting results to you as a page of 10 blue links, it presents it in a more narrative/conversational form, that feels more friendly, and more 'intelligent'. It's integrating ideas and information into a different format. But ultimately it's the same candy bar, just with a different wrapper.

There's no question there's legitimate underlying technology that has enabled what LLMs do. And there are plenty of other applications of *machine learning* that are enabling science and technology in truly novel ways, increasing the rate of progress.

But at the end of the day, whenever I see people saying, "Hey, I used ChatGPT to do this useful thing," I end up saying to myself, 'Yeah, and most people could do that 15 years ago with a Google search and 2 minutes of reading.' Not to mention a much slower computer and internet connection. We didn't need to build city-sized data centers that consume enormous amounts of energy to answer basic questions.

Personally I greatly preferred how it used to work, because it allowed me to be an active part of the process, and discover and learn other things along the way. And then it ruined itself because money. And it got so shitty that it feels like many people forget (if they are even old enough to remember when it was good). So they are impressed with these new 'advancements', which are still not as useful IMO as Google was 15 years ago.

But yes you're correct, society doesn't select for what's interesting or even better for people. It selects for convenience, efficiency, and profit.
 
not to bicker about it, but LLMs can create unique content, translate languages, help us improve our writing, and automate processes. again, a lot more than just find stuff. a lot of what they provide allows people to extend their capabilities beyond their own limitations. humans are limited, and some people are just shitty writers, slow-witted, uncreative.

some people still use traditional manual plastic-handle-metal-shaft-machined screwdrivers, some probably electric screwdrivers. both are valid in their own contexts, and some folks prefer one over the other in certain situations. that's AI. anything beyond that is hair-splittery and a fruitless dialectic. most of us can't raise animals, grow our own food, repair our own homes, doctor ourselves, fashion clothing either 🤷‍♂️

no one's gotta like it, but AI helps people level-set their capabilities within the context of their peers. welcome to the modern West
 


People love to reference the example of 'should we farm our own food as well?', etc.

However, the difference with these LLM-powered 'tools' comes down to questions that I think not enough people are asking themselves:

- Do these tools give you MORE agency? Or do they take it away from you?

- Are they extending your capabilities and value, or reducing it?

- If the value of your existence as a worker boils down to pasting stuff into an LLM and emailing the result to someone else, at what point are you no longer needed in that process?


And again, I'm not disagreeing with stuff like translation (though I put that more in the category of ML, as opposed to LLM). But there's a difference between a tool, and how that tool is used. We shape our tools, and our tools shape us.

It was one thing when machines replaced horses. But what happens when we are the horses?
 
we've been having this same discussion and raising the same questions as a society for a couple hundred years throughout waves of industrial and technological revolutions. the idea that suddenly we've crossed a line is wildly subjective.

as a rhetorical exercise perhaps, i get your point(s) but there's nothing beyond that. it's moot

to circle back, there's very real, helpful potential in staging an agent with datasets composed of arcade game architecture, design, and repair information.
AI is not "OMG SOOO AMAZING, LITERALLY REINVENTING REALITY" shit, but there are a lot of challenges it can help solve
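to make that idea concrete, here's a toy sketch of the retrieval step such an agent might use: given a tiny corpus of repair notes, score each note by term overlap with the question and return the best match. everything here (the corpus text, the keys, the scoring scheme) is a made-up illustration, not any real system's design:

```python
# Toy retrieval step for a hypothetical arcade-repair agent:
# pick the repair note that shares the most word tokens with a question.
import re
from collections import Counter

# Illustrative placeholder corpus of repair notes.
CORPUS = {
    "t-molding": "If the t-molding groove has widened with age, "
                 "shim the slot or re-cut it with a slot cutter.",
    "monitor": "A collapsed vertical raster usually points to the "
               "vertical deflection circuit or a bad flyback.",
    "coin-door": "Sticking coin mechs often just need cleaning, "
                 "not replacement.",
}

def tokenize(text: str) -> Counter:
    """Lowercase the text and count its word tokens."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def best_match(question: str) -> str:
    """Return the corpus key whose note shares the most tokens with the question."""
    q = tokenize(question)
    return max(CORPUS, key=lambda key: sum((q & tokenize(CORPUS[key])).values()))

print(best_match("my t-molding no longer fits the widened groove"))  # → t-molding
```

a real agent would swap the term-overlap score for embeddings and hand the retrieved note to an LLM, but the shape of the pipeline (question in, curated human knowledge out) is the same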

stop being a skibidi toilet, dude (or start being one? i don't know if that's considered good or bad in zoomer culture)

 