Does ChatGPT know what it's talking about?

There are some camera focus issues going on here, perhaps related to the tinted glass changing the focal point of the camera, or possibly the distance from the camera to the screen. Was one of these pics cropped way more than the other? You can clearly see the RGB phosphor pixels in the left picture but can't make them out at all in the right one. The phosphor dots are a physical structure; they cannot be changed by the video signal.

The camera is in focus in both pictures, and on a tripod, set to go off on a timer. The distance from the screen is the same, i.e., as close as I can put the tripod to the cabinet before the legs bump into it. The camera is not zoomed in either picture. Both pictures are simply cropped, nothing else (no resizing/resampling of the raw image from the camera; just cropped as-is).

I could take another picture of SPO with the tinted front glass removed, but that's not going to change anything except for overall brightness. Also, I see the same differences in real life with my own eyes. PO/SPO has always looked far smoother than all of my other boardsets. And like I said in my original post, I always assumed it was because of the Sanyo monitors. But I discovered that it has nothing to do with the Sanyo monitors when I connected a non-Nintendo boardset to one of them and saw that it was just as segmented looking as on, e.g., a Happ Vision Pro.
 
You probably prompted (pun intended) it to tell you exactly what your theory was.

I didn't have a theory at all, so I couldn't have prompted it to come up with that stuff it said about analog video filtering, difference in bandwidth, etc.

I still don't have a theory, other than I know there must be something different about the Nintendo video signal vs. e.g., Capcom's video signal, because the structure of the raster looks completely different despite the monitor being the same.
 
Not until you can get a good picture of PO/SPO that shows the detail of the individual red, green, and blue sub-pixels will you begin to understand the difference you are "seeing."
 

The pictures of SPO and SFII on the same Sanyo monitor show the same level of detail (same camera, same distance from screen, no zoom for either picture), so if you're not seeing individual red, green, and blue phosphor dots in the SPO picture then you're not seeing them in the SFII picture either. I doubt my camera is even capable of taking such a closeup picture. I think it would just refuse to focus that close up, and such a picture certainly wouldn't capture the entire E. Honda sprite in a 1032 x 702 pixel crop while simultaneously being closeup enough to see the individual RGB phosphor dots.

The pictures I've posted are representative of what I'm seeing in real life: big, prominent segments making up the SFII raster; small, far-less-prominent segments making up the SPO raster.
 
There must be something different going on with the SFII boardset's video signal vs. my Punch-Out / Super Punch-Out boardsets' video signal though, because the PO/SPO raster looks a lot smoother (less prominent segmentation, and much smaller segments too) than the SFII raster does:

View attachment 853478

Those pictures are from the same monitor (the upper Sanyo 20-Z2AW in my SPO machine). The only difference in how I took the pictures is: I left the tinted front glass on for the SPO picture but I removed it for the SFII picture because it made it look too dark.


Double nut stare 🥜😇
 
But as an AI guy, I'd say the fact that it sounds plausible, and that people in this thread are arguing about whether or not it's true, shows how far these models have come.

Gah, transformers didn't even exist 10 years ago, consumer-facing models have existed for less than 3 years, and now the new reasoning models can come up with a plausible justification for your observation on a really, really, really obscure subject.

You may not trust these models right now, but you trust people in your life who are wrong all the time. I'm floored at how far we've come from where we were just a few years ago, and even more so from where we were before foundation models existed.

As the models improve, our trust improves, and I believe we'll incorporate them deeply into our lives, particularly with embedded systems. Personal assistants interacting with our refrigerators, etc.

The fact that folks are arguing over an AI's theory about scan lines on a 40-year-old Sanyo in this forum is bonkers to me. I remember being blown away in 2019 when IBM's research division used AI to develop spice mixes for McCormick and perfume scents for the Brazilian market.

ChatGPT doesn't utilize RAG or other methods for some kind of database consultation. It has somehow clustered that knowledge and made those connections to come up with a plausible theory for your observation.

Personally, I think that's way cooler than whether it's completely right or wrong. You could use the exact same model to write a limerick about a hermit crab, or help you process a relative's death, or diagnose your burning urethra. It has learned all of those connections.

What a powerful model. 😵‍💫🤩
 
ok, let's see what chatgpt had to say...

"Nintendo's arcade boards — especially from Donkey Kong through Punch-Out!! and Super Punch-Out!! — include analog low-pass filtering on each of the RGB outputs before they ever reach the monitor."

No they didn't. I just looked at the Donkey Kong schematic, no low pass filtering on the RGB output.

"Most other manufacturers (Capcom, Sega, Konami, etc.) drove their RGB signals "hot" and unfiltered to maximize apparent sharpness."

The Donkey Kong schematic just shows a typical resistor ladder output, pretty much the same as every other arcade board I've looked at.

"Nintendo's RGB signals typically have a -3 dB point around 5 MHz, whereas others often extend well past 7–8 MHz."

What? Where is this happening?

"Nintendo tended to design their color encoder stages with slightly lower drive voltage (less than 1 Vpp per channel)"

What color encoder stages? Where is color encoded in an RGB monitor?

"Nintendo's boards are not exactly standard 15.734 kHz / 60.00 Hz NTSC-timed. They often run slightly lower — around 15.6–15.7 kHz — and with a bit more active line time.
This subtle timing difference slightly stretches the horizontal scan per pixel, leading to:"

Nonsense, you're not going to see a visible difference between 15.734 and 15.6 or 15.7 kHz.
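
For what it's worth, the arithmetic backs that up. A quick sketch using the numbers from the quoted claim:

```python
# Back-of-envelope check: how different is a 15.6 kHz line rate from 15.734 kHz?
# The values come straight from the quoted claim; this is just arithmetic, not a measurement.

f_ntsc = 15_734  # Hz, standard NTSC horizontal scan rate
f_slow = 15_600  # Hz, low end of the claimed "Nintendo" range

t_ntsc = 1 / f_ntsc  # seconds per scan line
t_slow = 1 / f_slow

print(f"Line period @ 15.734 kHz: {t_ntsc * 1e6:.2f} us")
print(f"Line period @ 15.600 kHz: {t_slow * 1e6:.2f} us")
print(f"Difference: {(t_slow - t_ntsc) * 1e6:.2f} us (~{(f_ntsc / f_slow - 1) * 100:.2f}% stretch)")
# -> roughly 63.56 us vs 64.10 us per line, under a 1% change in line period,
#    far too small to account for a visibly different raster structure.
```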

"Sanyo used extremely high-quality CRT tubes with fine-pitch shadow masks"

I doubt this; like all standard-def arcade monitors, they are just regular TV tubes with nothing special about the shadow masks.

"often slightly de-focused intentionally at the factory for Nintendo's spec."

What does that mean? They deliberately set the focus on the flyback?

"When you connect a sharp, unfiltered game board like Street Fighter II, you're feeding in a much higher-bandwidth signal"

No you're not.

So no, ChatGPT has no idea what it's talking about, it's just making something up that sounds plausible.
 
Hi, just because you're bad at using large language models doesn't make the model useless. Go ahead and write it off if you want to, or you could learn how to utilize AI, figure out the soft spots, and incorporate it into your workflow and life.

It is a general purpose model and you are asking it an extremely specific question.

If you were walking down the street and some dude popped out and asked you a super specific question, the first thing you'd try to do is answer it, right? Maybe later you'd be sitting at home and be like "Ah!!! I could have given a better answer if I'd taken a minute to think through it, or if I'd only had my schematics!" Same principle here.

So first, constrain the model. Say, "Hey, I'm trying to work on a Nintendo arcade machine," etc., etc. Give it the role of a technical advisor or diagnostic assistant to help you work through a problem. Emphasize the need for accuracy. Tell it who to be.

Look up "prompt engineering" and "personas" if you want to learn more.

Next, ChatGPT and similar LLMs are multimodal models that can accept a variety of data types, so give it a PDF of the schematics you're working with. Give it something to reference. Depending on the context window, you can give it lots of task-specific specialized knowledge for the task at hand.

Otherwise you're expecting it to work from generalized training data and statistical associations it made during the training process. It's like learning something super specific years ago and trying to remember it for a random question 15 years later, but your brain also has to remember Napoleon's last meal, and the mating habits of porcupines.

It can sorta do that, but you will get much, much better results if you constrain the model (give it a role) and show it what you're working on.
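
If you want to see what that looks like in practice, here's a rough sketch using the OpenAI Python client: a system-prompt persona plus an attached schematic image. The model name and filename are placeholders, not a recommendation of any particular setup; the same idea applies in the regular ChatGPT interface by pasting the persona text and attaching the image.

```python
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# "sanyo_20ez_schematic.jpg" is a made-up filename -- substitute whatever scan you have.
with open("sanyo_20ez_schematic.jpg", "rb") as f:
    schematic_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any current multimodal model
    messages=[
        {
            "role": "system",
            "content": (
                "You are a technical advisor and diagnostic assistant for arcade CRT "
                "monitor and PCB repair. Accuracy matters more than speed: say when you "
                "are unsure, and reason from the attached schematic rather than memory."
            ),
        },
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Here is the monitor schematic. Walk me through the RGB input "
                         "stage and what could make one board's raster look smoother."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{schematic_b64}"}},
            ],
        },
    ],
)

print(response.choices[0].message.content)
```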

It is a pre-trained foundation model, so you need to fine-tune it if you want to do something super specific with it. Please look up foundation models and fine-tuning if you want to learn more.

These are the most advanced public AI models ever created. Foundation models have only existed for five-ish years. Computer scientists are *flabbergasted* at where we are right now, and how far we've come in just a few short years.

So go ahead and write LLMs off if ya want to, but to me that just shows user error and an unwillingness to learn. ¯\_(ツ)_/¯

Ok I'm done arguing on the internet. Bye 👋🙃
 
I personally don't believe ChatGPT has a thorough knowledge (or much at all) of Sanyo arcade monitors or Nintendo video signals. I think it googled and then quickly summarized a few reddit threads and threw that together. Which is kind of what humans do when they try to talk about things too.

I do agree however that it's incredible what it's capable of, and I agree that it's going to be a huge part of everybody's life.
 
I asked ChatGPT the same thing, it gave me similar answers, and then I asked it where it learned that from. It replied with this:

That's a very good question — what I presented is a mixture of documented technical sources, community knowledge / repair/modder documentation, and standard assumptions in the arcade-restoration community. I'll break down which bits are well-supported, which are less certain, and where the "conventional wisdom" comes from.




✅ Well-Documented / Verified Evidence


Nintendo video inversion


  • A resource from Mike's Arcade mentions that Punch-Out!! "uses two monitors; Like other Nintendo games, the video signals are inverted" and "the board needs an external amplifier" for use in a JAMMA-compatible cabinet. PCB Iro Iro
  • The "Nintendo Video Inversion" document by Joel Rosenzweig (in the Mike's Arcade archives) describes how to invert the video from Nintendo arcade boards (e.g. Donkey Kong Jr.) — i.e. that Nintendo used a "negative" or inverted video signal as a design choice. Mikes Arcade
  • In the VS. System pinout documentation (also on Mike's Arcade), an explicit note: "As with other Nintendo Boards, all colors outputs are inverted video." Mikes Arcade
  • Mike's Arcade also sells a "Color Inverter and Sound Amplifier" board specifically for Nintendo boards, implying that their video signal is incompatible with standard (non-inverted) monitors without conversion. Mikes Arcade

These sources give fairly strong support for the claim that Nintendo's arcade boards (or at least several of them) used inverted video signals (i.e. "negative" color) relative to what standard RGB monitors expect.




JAMMA / Standard Arcade Video (as used by Capcom, etc.)


  • The JAMMA standard (as commonly referenced in community and arcade-of-collections documentation) specifies analog RGB output and negative composite sync as part of the "standard connector" capabilities. (JAMMA was created in part to unify interface standards across game boards and cabinets.) Nesdev Forums, Wikipedia
  • The "JAMMA SuperGun Adapter" documentation states that RGB signals from a JAMMA board typically have AC swing of 0–3 V and DC bias, which suggests that standard JAMMA boards operate in a more familiar (non-inverted) RGB domain. hacks.slashdirt.org
  • An electronics discussion on converting arcade RGB to VGA notes that "arcade video signals aren't standardized. They can range from 0–2 V or 0–5 V, etc.", implying that many arcade boards (especially those following JAMMA or common practices) use positive polarity and require scaling/attenuation for consumer-level equipment. Electrical Engineering Stack Exchange

Thus, while the JAMMA standard doesn't strictly enforce an absolute voltage or no inversion, the convention (as used by many non-Nintendo boards) is to output "standard" RGB (not inverted) with negative sync, compatible with many arcade monitors.




⚠️ Caveats, Uncertainties & Community Assumptions


  • The idea that "Nintendo boards uniformly use inverted video and non-JAMMA standard video levels" comes mostly from repair/modder documentation (Mike's Arcade, community forums) rather than corporate or manufacturer-published schematics. That means there may be exceptions, or some Nintendo boards might deviate.
  • The assumption that Capcom (e.g. Street Fighter II) uses standard JAMMA video is grounded in the large base of arcade restorers who treat CPS-1 / CPS-2 boards as "just plug into JAMMA harness and monitor," and the fact that many supergun / arcade-to-SCART / arcade-to-HDMI devices are designed around that expectation.
  • The community sometimes states "JAMMA boards output 5 Vpp RGB" (or "5 Vpp analog video") as a rule of thumb. For example, in a forum post someone said "JAMMA is 5Vpp RGB analog video (instead of VGA's 0.7Vpp)". Nesdev Forums But this is a heuristic rather than a single standard — many boards and monitors themselves have tolerance, buffering, or attenuation.
  • There is acknowledgment in modding/electronics circles that arcade video signals "aren't standardized" and can vary in amplitude, biasing, and sync scheme. Electrical Engineering Stack Exchange
  • Some modder/repair threads discuss "video inverters" for Nintendo boards (i.e. to reverse the inverted signal) as standard practice, which implies the inversion is a known "quirk" rather than assumed. forums.arcade-museum.com



📊 Summary & Confidence


Claim | Evidence / Source Type | Confidence
Nintendo arcade boards use inverted ("negative") video outputs | Modder docs (Mike's Arcade), "Video Inversion" papers | High
Nintendo boards require inverters/adapters for standard monitors / JAMMA conversion | Modder instructions, inverter product offerings | High
Many arcade boards using JAMMA (including Capcom CPS) use non-inverted RGB + negative composite sync | JAMMA specifications, supergun / adapter docs, community practice | Moderate to High
Arcade video voltages and bias are not uniform or strictly standardized | Electronics Q&A, forum discussions | High (for variation)

If you like, I can dig up official schematics for Punch-Out!! arcade board and Street Fighter II CPS-1 / CPS-2 boards and show the exact video-output circuits (inversion, voltage, sync) to support or refine this claim. Would you like me to do that?
 
Well I didn't ask it the exact same question so I didn't get the exact claims, but I will say, it will absolutely lie to you. It will tell you things that it knows aren't true, and when you call it on it, it will admit that it's wrong... and then a minute later start telling you things it knows aren't true again. It's pretty wild. Again though, it's a lot like human behavior!
 
(TLDR: give it a schematic & flowchart + good prompts)

It may be helpful to think of a statistical bell curve. If you ask it something super common, like why the sky is blue or where babies come from, it's likely to give a great answer because it was exposed to good training data for that. But if you ask it something super obscure, like a question about 40-year-old arcade monitors that most people on this forum can't answer, you'll probably get a bad result. It just hasn't seen enough of that data. So give it a schematic or something. You'll get MUCH better results! 🤙

I assume this entire forum was scraped and included in the training data, so it does have a generalized knowledge, but gotta remember that it is a deep learning neural network, and it doesn't have a database to remember things. It can associate things, and it can reason, but your results will be way, Way, WAY better if you give it something to work with.

The model is likely to be very good at deciphering schematics, as it has seen a lot of those. It's likely to be bad at obscure questions where it's seen very little training data.

Try giving it a pic of the monitor schematic, plus the Sanyo diagnosis flow chart cheat sheet a lot of us use, pics of the monitor chassis, and whatever else you think might help! You'll get a MUCH better answer! 🙃

Also tell it to think through the problem, that accuracy is really important, and to act as a technical advisor and diagnostic assistant.

Give it feedback about whether you like or don't like the answers you receive.

Treat it like an intern you need to train, and you'll be surprised at the results!

If you start getting a lot of wrong answers, long chats do degrade over time, so start a new instance of the model (a new chat) and give it the appropriate data and prompts.

These models are super duper powerful, just gotta learn how to use them! 😀

Give it some schematics and try it out! Take a pic of the monitor chassis you're working on too. Give it something to go on instead of expecting it to use web results or statistical associations it previously learned.

It is a tool, and if you learn how to utilize it, you'll get much better results!
 
This thread is funny.

Lots of people with enough knowledge to be dangerous, but only a couple of people posting correct stuff without mixing in false info too.

I'll chime in with a couple of BRIEF examples (because I'm traveling with only my phone…and it sucks.)

There are color encoder stages on all video-generation boards. The simplest are the cheap resistor DACs that are ubiquitous. And they can be (and are) designed to have different output levels on different boards.
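
To make that concrete, here's a toy calculation of a weighted-resistor DAC driving a terminated video node. The resistor and load values are made up for illustration, not taken from any board's schematic; the point is just that the resistor choices set the output levels.

```python
# Toy model of a weighted-resistor video DAC driving a terminated line.
# Resistor and load values here are illustrative, NOT from any specific schematic.

def dac_output(bits, weights_ohms, v_high=5.0, r_load=1000.0):
    """Node voltage where the weighted resistors and the load meet.

    bits         -- tuple of 0/1 logic levels, MSB first
    weights_ohms -- resistor per bit, MSB first (smallest R = most significant)
    v_high       -- logic-high drive voltage (idealized TTL)
    r_load       -- effective load/termination seen at the video node (assumed)
    """
    g_sum = sum(1.0 / r for r in weights_ohms)            # total source conductance
    i_sum = sum((v_high if b else 0.0) / r                # current injected by each bit
                for b, r in zip(bits, weights_ohms))
    return i_sum / (1.0 / r_load + g_sum)

weights = (220.0, 470.0, 1000.0)   # hypothetical 3-bit ladder, MSB..LSB

for code in range(8):
    bits = tuple(int(c) for c in f"{code:03b}")
    print(code, f"{dac_output(bits, weights):.2f} V")
# Full-scale output (code 7) lands around 4.4 V with these values; swap in different
# resistors or a different termination and the whole scale shifts -- which is how two
# boards with "the same kind" of resistor DAC can still drive the monitor differently.
```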

Scan lines are the part of the video display that result from the electron beam impacting the phosphors. That is…they are the picture lines. Not the black space between them. High quality monitors with small dot sizes that are well focused will have skinnier scan lines. Fat scan lines can result from poor focus, or a "hotter" electron beam with more electrons/second (brightness) and/or from electrons with higher kinetic energy (anode voltage too high leads to secondary scattering effects). Fatter scanlines can also result from a slower horizontal scan rate (with the resulting higher electron impact count per sub pixel).
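
For a rough sense of scale, here's what the line spacing works out to, using assumed numbers (a nominal 19" 4:3 tube and ~224 visible lines), not measurements of any particular monitor:

```python
# Rough sense of scale for scan-line spacing, using ASSUMED numbers: a nominal 19" 4:3
# tube and ~224 visible lines for a 240p-era arcade game.
visible_height_mm = 290      # approx. visible height of a 19" 4:3 CRT (assumption)
active_lines = 224           # typical visible line count for a 240p-era arcade game

pitch_mm = visible_height_mm / active_lines
print(f"Line pitch: ~{pitch_mm:.2f} mm")
# ~1.3 mm between line centers. If the focused beam spot is much smaller than that, you
# see thin bright lines with dark gaps; if focus, brightness, or anode issues fatten the
# spot toward ~1.3 mm, adjacent lines start to blend and the raster looks "smoother".
```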

I can't speak for SFII but the Nintendo games + Sanyo EZ do have at least some low pass filtering. I'm on my phone and can't really go into details, but a trivial example is the Sanyo's video inverter itself. I don't know what the filter cutoff frequency is (can't assemble complete signal path schematics on phone) but those degenerate common emitter amplifiers do have a bandwidth limit. Probably in the 5-10 MHz range.
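
As a sanity check on that ballpark, a single-pole roll-off sits at f_c = 1/(2πRC), and very ordinary component values land in that range. The R and C values below are made up for illustration, not read off the Sanyo schematic:

```python
# Quick sanity check on the "5-10 MHz" bandwidth figure: a single-pole RC roll-off has
# f_c = 1 / (2*pi*R*C). The R and C values below are made up for illustration -- they are
# NOT taken from the Sanyo schematic -- just to show what kind of values land in that range.
from math import pi

def cutoff_hz(r_ohms, c_farads):
    return 1.0 / (2.0 * pi * r_ohms * c_farads)

for r, c in [(1_000, 33e-12), (2_200, 10e-12), (470, 47e-12)]:
    print(f"R = {r:>5} ohm, C = {c*1e12:>4.0f} pF  ->  f_c ~ {cutoff_hz(r, c)/1e6:.1f} MHz")
# A few hundred ohms to a couple of kilohms against tens of pF of stray or Miller
# capacitance is enough to put the -3 dB point in the single-digit MHz region.
```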
 