Monday, March 16, 2026

MSX Cartridge.

Some months ago now, I decided to create a cartridge for MSX computers. I wasn't really sure which direction to take. I tried to keep it simple by using a fast processor, but the various tests I carried out yielded no reliable results.

So I ended up facing a dilemma: use a microcontroller like the Raspberry Pi Pico, or go with what seems almost inevitable to me, namely an FPGA.

I don't appreciate Microsoft applications at all for a whole host of reasons, yet the most serious way to work on the Pico involves installing the necessary development environment within Visual Studio Code. Well, I'll never manage that. I just have to open the Microsoft application to be faced with all sorts of acronyms and logos, the kind I find on my washing machine, but in an A320 cockpit version, and I nearly keel over from sheer exhaustion.

So I chose a different approach, even though it's not the same thing at all. This time, I'm going with an FPGA. The advantage is that I can have more FLASH and SRAM memory to create my cartridge since I'm not constrained by the Pi Pico's internal resources. Furthermore, the FPGA will have a fast internal processor allowing me to manage the overall operation of the cartridge.

So yes, I prefer to challenge my brain with rewarding things, like learning VHDL and the whole Efinix FPGA environment, rather than constraining it with a vision of the Microsoft kind that holds no interest whatsoever.

As a result, the circuit I plan to build bears no resemblance to the early circuits and is far more complex. After doing a rough layout, I contacted specialists for the actual production of the circuit. The price is inevitably very high for this kind of 'personal DIY project', even more so in France. So, I still tried to see if I could manage to do something with my limited experience in the subject. This brings me to this result:


For now, I have absolutely no idea if I'll succeed in routing this circuit. In the past, I did the same kind of work on GoWin FPGAs, but it wasn't as difficult. The Efinix FPGA requires more external resources for its implementation.

After embarking on the adventure with Efinix FPGAs, particularly regarding the use of Efinix's internal soft processor, I must say the going had been flat so far. Now, the path is starting to climb seriously...


Sunday, March 15, 2026

Efinix FPGA, Gusmanb Logic Analyser.

I have been experimenting with Efinix's development tools for several weeks now. Apart from the fact that overall I find these tools quite well-designed, efficient, and easy to use, the fact remains that some things don't seem very practical to me.

I replaced the built-in code editor with an external editor, namely Zed, which has a plugin for VHDL syntax highlighting. Because with the habits and reflexes that develop, juggling Windows' window-parenting system very quickly becomes unbearable, much like Windows itself, actually.
Once Zed was adopted, development with the Efinix tools became fluid and productive.


Another point to mention, but here I am issuing a warning. It might indeed be that I don't possess enough knowledge to use the integrated debugging tools properly. The fact remains that the internal Efinix logic analyzer always starts from the clock root to manage its sampling frequency, and always uses a clock source from the design as a reference. I haven't yet found a solution to sample at an arbitrary clock value. And I don't even know if it's possible. No, I haven't read all the documentation. But that's beside the point anyway.

The 'thing' is that it's relatively difficult to sample low-frequency signals. Which is the case for a serial link at 19200 x 16 baud. So, how can one practically verify the signal shape on a transmission pin?
Because now that I have managed to implement the transmission and reception for a ZILOG SIO, before continuing with the implementation of functions, I first need to verify that the various formats are correctly respected. That is to say, the parity and the number of STOP bits.

The ideal in this case is obviously to have an external analyzer. I do own one, but I never managed to get used to its PC interface, and I don't find it very easy to use. Especially since it is limited in channels and recording buffer size.

A few months ago, I came across a YouTube video presenting a logic analyzer based on a Raspberry Pi Pico. The project seemed to be progressing well until it hit the famous 'bug' in that chip's IO ports. I went back to look at the project when the Pico 2 was released: it had evolved to use the new component, and had reached the stage of proposing a printed circuit board containing all the elements needed to serve as a 24-channel logic analyzer.

Well, 24 is 16 + 8, and this is precisely why professional suppliers will never release hardware with this number of channels: it is exactly the number required to trace the bus of a standard 8-bit processor. So professionals sell 16-channel devices at a 'reasonable' price with mediocre graphical interfaces, and offer to upgrade you to the 32-channel version at a much higher cost. This commercial practice is 'rotten' in the medium term, because sooner or later a 'heretic' will release hardware at a ridiculously low cost and break the whole strategy.
And the 'heretic' in this case is Agustín Gimenez Bernad. This gentleman has developed a 24-channel logic analyzer, the details of which you can find at this location: https://github.com/gusmanb/logicanalyzer

As for me, I didn't even try to build the hardware. I found a ready-to-use circuit on AliExpress at the incredible price of €28 (03/2026). In fact, I acquired two units because, as if that wasn't enough, two circuits (or more, it seems) can be chained together to increase the number of channels. I haven't tried that yet.


The software is also found on the GitHub repository. In my case, I downloaded version 6.0. Knowing that the software also has a large number of decoding 'plugins', it is necessary to install Python to take advantage of them. The only small restriction is that the Python version must be 3.13 maximum. The current version is 3.14 (03/2026), but previous versions are still readily available without issue. 

In a word, this system, on paper, seems to tick all the usability boxes for me. And, with the SIO development underway, this is the perfect time to try out this logic analyzer.
Here is what the capture of the serial frame I send from the FPGA at system startup looks like. With the UART decoding configured, it's absolute child's play not only to follow all the ASCII codes in the frame, but also, and most importantly, to visualize the effect of the parity configuration and the number of stop bits.


And so concludes the description of the process that led me to use this analyzer. Yes, because the whole thing is so well made that just starting it up practically teaches you how to use it. One or two tries and you've got it. The 'thing' is so simple, intuitive, and powerful that I have nothing else to write. Simply magic!

This brings me to a small point about the tools used to make the development of my FPGA projects easier:

  • Zed, for easy editing of FPGA, C, and other sources.
  • GTKWave for easy visualization of signals produced by the Efinix logic analyzer.
  • Agustín Gimenez Bernad's (magic) logic analyzer.

The cost of all these tools combined is practically zero, since only the logic analyzer hardware needs to be considered, and it is extremely low.

There are solutions under Linux, but for now I haven't switched to that OS. I used it for a very long time, but until now, only the Windows system allows for reasonably consistent updates of the system and tools. Which is not the case with Linux. And since I hate wasting time on things that aren't properly managed, well, following update troubles under Linux, I stopped working with this system suited either for computer gurus or basic secretaries, but certainly not for people like me. Unless I switch to Mac, ultimately the most reliable Linux on the market ;-)
 

Thursday, March 12, 2026

Cody the Computer and Its Keyboard.

A few months ago, I presented Frederick John Milens' initiative to build a small computer in the purest style of the 80s, closely resembling the Commodore 64.

https://www.codycomputer.org/

Since then, the new version of the Commodore 64, the Commodore 64 Ultimate, has been delivered by the brand new entity Commodore.net.

What differentiates Frederick's version is that his creation uses 'real' physical components, even though the video and sound functions, among others, are handled by a Propeller processor. His concept, although not advertised as C64 compatible, nevertheless remains exactly the same style of computer as the original C64.

The version created by the new Commodore entity, for its part, is based on an FPGA design. It is not my intention to prefer one version over the other. Both are interesting. I received the latest C64 and also built a copy of the Cody.

https://www.commodore.net/


Beyond the hardware work, Frederick has done a significant amount of documentation. In fact, the book available in PDF on his website makes it possible to understand absolutely all the hardware aspects of his machine. Additionally, his GitHub repository provides absolutely all the information necessary to build the machine yourself.

As far as I'm concerned, I have only one small criticism of his approach. It concerns the keyboard. I don't see anything interesting about building a keyboard yourself. Unless, of course, it's a very special keyboard like a processor board with a hex keypad. Otherwise, I prefer to use a standard USB keyboard, and besides, it costs less. However, the USB output of the keyboard obviously needs to be adapted. I've already built this type of interface in the past for an MSX machine. On the other hand, I'm not a fan of interfacing with the original input/output port. There can sometimes be synchronization issues.

So what to do for Cody? Well, since all the resources are available on Frederick's GitHub repository, I decided to try a different approach, which is to directly provide the received keyboard code onto the processor's data bus.

For this, I created a small adapter printed circuit board that I will place on the I/O interface circuit socket. It will contain the interface circuit plus the connection to a small external board designed to provide a serial output corresponding to the key codes typed on the USB keyboard.


There's also the video output, which is a bit too 'retro' for my taste, but there are inexpensive converters for this type of video output to VGA, so maybe I'll look into that later. Since I have put together VHDL resources for creating a relatively universal frame buffer, I might be able to use them as part of the Cody project.

Tuesday, March 10, 2026

I really like AI!

I really appreciate the support of AI. When I think about the so-called education I received during my schooling! As a result, well, I'm making steady progress with VHDL. A few days ago, I tackled the Zilog SIO. This time, I started without the help of AI, only relying on basic personal knowledge and what I've recently learned thanks to AI's input. The result is an encouraging start in understanding how the SIO works. For now, I'm just getting the source code to work for an 8-bit data serial link, no parity, and one stop bit. The basics, basically. And well, the result is there:
 
 


In fact, this time I'm using AI not to gain a global understanding of the art of programming in VHDL, but to acquire additional syntax for manipulating information. Indeed, even though only the basic RX and TX functions work so far, everything I've coded already includes all the necessary clock-domain crossings and the synchronization logic between the various processes.

It's indeed better to start the code this way because the SIO has different operating modes. For now, I'm dealing with the asynchronous mode, but I'll also need to manage the synchronous mode with flag detection, because that's the mode used on the EMU1 to manage the floppy disk drive.

Just to reliably manage the asynchronous mode, there's still a fair amount of code to generate and also a lot of testing. But still, it's totally satisfying to see the birth of functional VHDL code.

I must also add that, with practice, editing code using the Efinix software's VHDL editor is not the right solution. The main reason is that multi-windowing management is catastrophic. In this respect, what I'm observing aligns perfectly with my idea of development on Windows. Efinix probably uses a framework other than Microsoft's. Consequently, the problems related to that framework are added to those purely related to Windows management.

So I use Zed, which has a plugin for VHDL and can even perform some syntax checking. It's really, really good. I now only use the Efinix IDE to launch its 'standalone' tools: compilation, the various project configurations, test bench configuration, and the internal signal analyzer. That way, I no longer have to deal with window inheritance, because that thing is truly a nightmare on Windows!

Wednesday, March 4, 2026

SDCC Z80 crt0.s & __GSINIT

Until now, I've been using SDCC without fully mastering its compilation and linking process. For simple applications on the Z80 processor, it usually worked fine, even if I sometimes encountered code that didn't behave exactly as I expected. By making simple workarounds in my C code, I could easily bypass the issue. But this time, the situation is different.

Indeed, I need to test the functionality of the Z80-style SIO that I am currently developing for an FPGA. Therefore, I need to be as confident as possible in the code produced by the compiler.

So I became interested in this 'famous' crt0.s initialization file. I have to admit that I had already looked into the matter but remained quite skeptical about how it works. Mainly regarding the method of organizing the different memory segments managed by SDCC.

No matter how I modified the order of the segments, the memory mapping never actually changed. As a result, I remained somewhat unclear on this subject. My code still worked, so it didn't really bother me. But now, it was time to address the issue. Along with the segment problem, there was also the matter of initializing the 'famous' global variables, especially if they need to be set to a specific value at application startup.

The 'famous' directives concerning memory segments in the crt0.s file:

And the correct result in this example:

Actually, I declared my code to start at 0x0110 and the data at 0x2000 because I have 8KB of program space available starting from address 0x0000, and the RAM starts at 0x2000. The output from the linking procedure correctly shows the __GSINIT routine located in the code area.

However, and this is where the problem lies, until I understood the various issues and their causes, I could never get this __GSINIT code to be present in the code segment—it always ended up in the data segment.

This __GSINIT procedure is supposed to contain the initialization code for global variables. So until now, I was initializing these variables directly in my code. I couldn't have them properly set BEFORE the program started. That's why I sometimes had to modify my code, fully understanding why the initialization wasn't happening at startup, but having absolutely no idea why I couldn't change this behavior.

It took me a few hours to understand the underlying problem, fix it, and finally refine the crt0.s code to place the segments in the right locations and write the correct initialization sequence for global variables.
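For reference, here is roughly the area layout from the stock Z80 crt0.s shipped with SDCC (abridged; check your SDCC installation for the exact file). The point that matters is that the relative order of the .area declarations in the first module passed to the linker fixes the final layout, and that _GSINIT must be declared among the code areas for the copy loop to land in ROM; the actual addresses, 0x0110 and 0x2000 in my case, are then given at link time with --code-loc and --data-loc.

```asm
	.module crt0
	.globl	_main

	;; The order of these declarations, in the first module given to
	;; the linker, fixes the final memory layout. _GSINIT sits among
	;; the code areas, so the init code ends up in ROM.
	.area	_HOME
	.area	_CODE
	.area	_INITIALIZER	; ROM copy of the initial values
	.area	_GSINIT
	.area	_GSFINAL
	.area	_DATA
	.area	_INITIALIZED	; RAM destination of _INITIALIZER
	.area	_BSS
	.area	_HEAP

	.area	_GSINIT
gsinit::
	ld	bc, #l__INITIALIZER	; length of the initialiser block
	ld	a, b
	or	a, c
	jr	Z, gsinit_next
	ld	de, #s__INITIALIZED	; copy initial values ROM -> RAM
	ld	hl, #s__INITIALIZER
	ldir
gsinit_next:
	.area	_GSFINAL
	ret				; back to the startup code
```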

And I must say that AI was of no help whatsoever in finding the root problem. I had to search for hours on the internet without finding an answer to this issue. And yet, there are plenty of people struggling with this!

In short, the problem is actually quite simple to solve. Once done, the logic behind the segments becomes very easy to understand and define.

After half a sleepless night, I finally have knowledge of this matter. Now I can move forward with testing the SIO...

Monday, March 2, 2026

I really like AI.

After a few weeks of practicing with AI to help me with coding, whether in VHDL or for developing Windows utilities, I must say that its contribution is very positive.

Of course, you have to adapt to its way of doing things. That means asking the right questions, and above all, structuring your own thinking in order to guide the AI engine towards the right options from the start. This allows the iterative process to proceed as efficiently as possible afterwards. 

That's how I was first able to create a small Windows application to convert binary files to the format required by the Efinix memory initialization tool. And without it really taking up much of my time, I integrated into this utility the ability to first run the 'make.bat' file for compiling the embedded Efinix project, before starting the conversion to the Efinix format:

Obviously, it doesn't have the aesthetics of modern applications. It only uses Microsoft's basic API. But hey, it suits me perfectly and fully meets my needs.

I also developed another small application for working with serial connections. When I remember how much time it took me to achieve roughly the same result back in the early 2000s...

Another subject. I've been using Notepad++ for writing code for my embedded projects for years now. I did try Visual Studio, the interface that 'everyone' has been using for years. Sorry, but I can't stand this style of interface, which wastes a huge amount of intellectual 'bandwidth' while pretending to be a great help to developers, and which I consider to be a fantastic mind-jammer.

And then I just discovered Zed. The perfect thing. Simple, with the directory tree displayed on the left. The dark theme is easy to configure: everything I like. However, I wanted an icon somewhere on a toolbar, capable of launching make.bat directly without having to go through the DOS command line. Impossible! Hmm... Hence the integration of this option directly into my conversion utility. And there you have it: in two clicks of the 'mouse' I save and directly generate the file, ready to be integrated into the Efinix tool.


So I end up with extremely simple and practical tools for my experiments. Since I'm currently working on creating the Z80 SIO in VHDL, to be interfaced with a Z80 core also in VHDL, I can't perform real-time debugging, but I do have the logic analyzer integrated into the Efinix tool for the initial tests. Once the RX/TX part of the SIO is validated, I'll be able to use it to output debugging information to the screen.

This leads me to a few reflections on AI and, first of all, Microsoft. You might think I see evil everywhere, but I've always believed that Microsoft spends so much on research and development not for what they claim—that is, 'making the developer's life easier'—but quite the opposite, to overcomplicate it. While indeed giving the impression of working towards that goal. To do this, Microsoft 'cultivates' perpetual instability. That of applications, that of the associated documentation, and of course that of development tools. One of the tactics used to maintain its pseudo-technological lead.

When you really look at it, what's new since the release of the Macintosh? Huh? Tell me!
Since the late 80s, this way of doing things at Microsoft has fostered in me a deep disgust for the entire 'Microsoftian' ecosystem. Even though I mostly use development tools that run on Windows.

AI allows me to bypass part of these obstructionist strategies put in place by Microsoft. So, we're not yet at the 'all visual' stage, but it's true that recovering functional code from sentences in French, in my case, closely resembles what I mistakenly imagined Visual Studio would be when Microsoft released that IDE. I thought the development methodology would indeed be visual, meaning with less basic (and unnecessary) code to enter, but spending more time entering code with higher added value. Well no, it was just the development interface that switched to windowed mode. Right....

Hence the succession of development frameworks that have come and gone, except perhaps Qt. In short: gigabytes to download, two years of training to use the tools, and indigestible documentation. And this quite useless, very low-quality training work has been offloaded onto the computer science departments of French universities, or onto what was, in France, School 42, affectionately nicknamed 'the pool'.

As a result, this 'French excellence' no longer makes sense either. And it's part of the university model that is going to, or rather already is, sinking into the abyss of obsolescence. Honestly, where are we going, if any old 'peasant' can train themselves and produce as well as our master's degree graduates!

A layered reflection: the French system is based on castes, where anyone not from the lower or middle bourgeoisie is excluded from 'potential' success (other than financial success, which is reserved for the upper bourgeoisie), with the French education system acting as the watchdog. So, if this barrier can be circumvented thanks to AI, what will happen? Well, actually it's very simple, and it has always existed in France: as the educational exclusion system gradually becomes inefficient, it is replaced by increasingly tight filtering of entrepreneurial initiative and increasingly effective administrative violence towards those who shouldn't be there. Hence a fairly probable vision of the road France is taking...

It's really time to look elsewhere... 

 

Wednesday, February 11, 2026

Drumulator OS: done!

And yes, the Drumulator system is finally working in the Trion FPGA. Actually, I managed to get the Drumulator system running in an FPGA—Altera at the time—a few years ago already, but by simulating only one interrupt on CTC channel 1, the channel responsible for scanning the display and keyboard.

I knew from then on that to properly implement this drum machine's system, I had to fully and correctly implement the Zilog CTC in the FPGA. Meaning with the entire IRQ recognition system, interrupt vector delivery, and IRQ enable management.

In the meantime, I turned my attention to other, easier subjects. And then, after experimenting with GoWin FPGAs, I decided to switch to Efinix circuits—simply because they are much cheaper than GoWin ones. 

And I have to say, the philosophy behind Efinix's development system actually suits me well. For example, I had never implemented a fast 'proprietary processor' in an FPGA before. I was able to implement Efinix's Sapphire processor without much difficulty. The only small point that gave me some trouble was implementing the JTAG interface for this soft processor. Afterwards, I tried using the debugging tools in the Efinix software without success. Once again, I was thinking in terms of other FPGA manufacturers and looking for a similar approach. Then one day, without really overthinking it, I simply set up a debug session and launched the built-in logic analyzer. These tools are very simple to use and effective.

After that, I wondered which tool to use to visualize the captured signals. GTKWave—and it's perfect:


I was able to test the CTC's operation using the internal Sapphire processor I had implemented, by sending configuration data and observing the CTC's behavior. I was aware that I couldn't test everything, especially the Z80 system's interrupt recognition and acknowledge mechanism. That's where the internal logic analyzer proved to be extremely helpful. It allowed me to verify signal timing, which is essential in a system that uses multiple interrupts coming from the same source.

Still, the system behavior remained completely erratic. But I knew that an uninitialized data RAM could cause the system to go haywire.

So I pre-initialized the data RAM to 0xFF, and this time the system seemed stable. In order to properly format the loading data—whether for ROM or RAM—I wrote a small utility to easily process the required files:


Since I hate wasting time with Microsoft tools that don't address standard needs at all, require years of learning, and stray far from real-world issues, I simply asked the AI to generate the skeleton. I got it in about 30 minutes—just enough time to clarify a few points and let the AI algorithm adapt. And there it is!

At that point, all I had left to do was connect the keyboard and test the Drumulator's initialization sequence: ERASE / CASSETTE / ENTER.

And voilà — the data RAM properly initialized, and a Drumulator running flawlessly.

I still need to implement the data potentiometer control system, since it also uses the CTC. And knowing that I've already implemented the drum machine's waveform sequencer, I can now say I have all the fundamental building blocks to rebuild a Drumulator — and hopefully improve it in one way or another.

At startup, the unit should display 01E, as shown in this image taken from the web:


And here is the result on the control panel I created for testing:


The photo isn't great, but hey, the main thing is there ;-)

I tested a few possible actions using the user manual. The display behaves exactly as described in the manual. So I now consider the implementation of the Drumulator core in FPGA to be validated.

And if I sum up the work done over the past two or three months:
  • Development of a minimal VGA interface prototype.
  • Development of the MSX cartridge prototype (PCB prototype in progress).
  • Porting of the Drumulator core to FPGA.
All of this on Efinix Trion FPGAs — I'm quite pleased with myself!