andrewb
Well-known member
So I'm not saying I use GPTs or LLMs for arcade purposes, but my company has access to (which means I do) a private, pro workspace with the reasoning-focused models o1, o3-mini, and o3-mini-high, meant to help engineers like myself break down highly complex full-stack coding for our web apps. We also have access to GPT-4.5, the research preview OpenAI is putting out. The big deal about 4.5 is how extensively it has scraped human-written material from as many sources as it can find on the open web, and a lot of what it cites you can actually see comes from here.

If they stay on this path with GPT-4.5 and beyond, maybe not so much the o1 LLMs, we may actually see a highly viable option for getting very useful information for arcade-related tasks. Here's me asking it how to deal with wood aging and the fact that T-molding no longer fits snugly in a groove that has widened over time. It's interesting to see the evolution. This was one of many fixes it offered, from quick to more in-depth, based on the tools I have, the type of wood the cabinet is made of, the kind of fix I was looking for, etc., all of which it prompted me for first.
The thing about everything you just posted is that you used to be able to just google that topic, and the first page of hits would have been links to this and other arcade forums with very relevant info. (Instead of a page of sponsored ads for shit to buy.)
Or you could have just searched here and found the same posts.
This is what's irritating about these AI companies and their LLMs. In many cases they're actually not much better than a Google search was 10-15 years ago. They just seem more 'friendly' to use, because people would rather 'ask their friends' than actually research things.
The fact is, the knowledge that informed the answers ChatGPT gave you came from people. People here and elsewhere. Society didn't mind Google organizing the web's knowledge, and providing links to it, because those links would drive traffic to those sites. That traffic benefited those sites, site owners, communities, etc.
Things like ChatGPT are just hoovering up that info and using it for their OWN profit, without providing any of those additional benefits to the people who created and curated that knowledge originally. And that is a net negative for society.
It might seem to YOU as an individual like it's easier to use. Maybe because it saves you a few clicks and some reading, and you get an answer spoon-fed to you. But it's also the answer the *LLM* decides to give you, instead of YOU using YOUR brain to evaluate a bunch of different answers and pick the best one. It removes your thinking and choice from the process. It turns you into a passive receiver of information, not an active participant.
It actually hides knowledge from you, because it doesn't give you the chance to explore and think for yourself. It's only concerned with giving you AN answer that it thinks you'll be happy with. It's like how the Spotify Top 100 becomes what people listen to, because people check the Spotify Top 100 to figure out what to listen to. You get a self-reinforcing system that narrows choices and maximizes benefit for the company that owns the system. Not you.
These differences have additional consequences that most people don't realize right now, but that will affect everyone in the long run. Often in ways you don't recognize as bad until it's too late.