vendredi 3 avril 2026

MSX cartridge, it's progressing...

On this cartridge, I'm at the stage of trying to create the PCB. In my last post on this subject, I was at the beginning of creating the PCB and I had absolutely no idea whether I would manage to make something clean. Well, now I have the answer: 


All the tracks are routed. I still have a bit of work to do on improving the routing, with no functional impact: it's more a matter of streamlining. While working, I switched to version 10 of KiCad. I've known KiCad for over 20 years. A lot of things have improved, but I feel that since it went 'semi-professional', some ways of doing things belong more to professional software than to solutions accessible to the greatest number of people.

A small message to the KiCad developers: by all means improve its operation and features, but don't turn it into a clone of professional solutions that require months of training and practice just to create an amateur circuit. You're now drifting a bit too far from the accessibility that has made KiCad successful so far. As for me, I think I won't update KiCad any further, especially since the dark theme is finally available in version 10.

Of course, I have absolutely no idea whether this board will work or not. This is my first implementation of an Efinix FPGA. But you have to start somewhere. And, as a result, this allows me to compare this type of FPGA with equivalent FPGAs from Gowin. While it's true that Gowin FPGAs are much more expensive than Efinix FPGAs, the 144-pin Gowin part equivalent to the Efinix one used here requires far fewer external components to get running and is, in my opinion, easier to implement on a circuit. The Gowin module I made worked fine with just a two-layer board. Here, I was forced to go with a four-layer board.

If this Efinix FPGA implementation works, I will nevertheless have a functional template for implementing the component. This will allow me to create more complex designs more easily in the future with these components. 

lundi 30 mars 2026

The Drumulator in an FPGA.

Might as well say it right away: it works.

And to specify what works: well, actually, everything!

That is to say, when I press the 'pads', I do get the correct sounds.

Quick summary of the background: a few years ago now, I decided to 'transfer' an entire Drumulator into an FPGA. Well, for the digital part, obviously.

And to be exhaustive, my adventures with FPGAs started a long time ago. I had stopped in 2014 with the integration of a Prophet VS's processor board inside an Altera FPGA. At that time, the work was relatively simple, since it involved assembling distinct IP blocks. But obviously, when it came to creating the missing components from scratch, that was a whole different story. I had to get into the 'hard' part of VHDL coding (the language I primarily use). So I had chosen to start with what seemed to me at the time to be a simpler machine than the Prophet VS, namely the Drumulator.

In reality, it wasn't any simpler. There too, everything went very well as long as I emulated the Zilog CTC with a super simple interrupt generator that merely mimicked the operation of the vectored interrupt system. Obviously, that wasn't going to be enough, because the CTC generates multiple interrupts, so it was absolutely necessary to manage the Z80's vectored interrupt system properly.

It was from that moment that the troubles started again. I had to once again confront VHDL.

The fact is, I have no formal training in the subject, to begin with. Doing logic isn't very complicated, but switching to the formalism of a hardware description language like VHDL requires an 'approach' that I didn't have. Basically, I didn't know where to start to truly get into this subject.

And that's when AI allowed me to do it. In fact, I used it as a teacher. Now obviously, I knew a bit of VHDL, which helped me a lot. As for the AI, I had at my disposal a teacher who never got tired of answering, had answers to almost everything, was available 24/7, and practiced no form of social class discrimination, and therefore no filtering in the acquisition of knowledge. Which is obviously contrary to what I experienced throughout my schooling in France, since this country practices social discrimination endemically. Coming from the very bottom layer of French society, I had no chance of accessing anything remotely advanced in electronics. The French education system respected its social objective to the letter.

AI therefore allowed me to bypass this system and, assuming I possess at least the minimum capacity to level up, with the training provided by AI, I managed in three or four months to get past a major hurdle.

I close this parenthesis with this magnificent image of the development system used for my FPGA work:


So, for now, I haven't implemented the analog output multiplexing system. The Drumulator uses time-division multiplexing to generate its eight voices. Each output is reconstructed by an operational amplifier followed by its small hold capacitor. Very standard. Except that I'm using serial buses to manage this whole system. The DAC is not a parallel-bus DAC but works over an SPI bus. The same goes for the final volume-control circuit and the final audio channel switching.

Moreover, the sequencing system that generates the sound waves works in a fully pipelined manner. Managing the other serial buses will be done by adding stages to this pipeline. The only thing I paid attention to is that the circuits used are fast enough so that all the information has time to be transmitted between each new sample output.

The sound produced by my system is slightly different from the sound produced by the Drumulator. For one thing, I don't have the analog filtering stage; above all, I'm not reading the samples at the Drumulator's exact rate, and I'm not entirely sure the values used to expand the 8‑bit samples into 12 bits for the DAC are completely correct.

A few years ago, I implemented the Drumulator's sound generation system on an ARM processor, just to get familiar with the companding system it uses. I had manually calculated the conversion table based on the original component documentation. To go all the way, this time I also asked the AI to calculate this table for me. I must admit the result is quite convincing.

To illustrate all this, I'll try to make a short video with sound...

lundi 16 mars 2026

MSX Cartridge.

Some months ago now, I decided to create a cartridge for MSX computers. I wasn't really sure which direction to take. I tried to keep it simple by using a fast processor, but the various tests I carried out yielded no reliable results.

So I ended up facing a dilemma: use a specialized processor like the Raspberry Pi Pico, or go with what seems quite inevitable to me, namely the use of an FPGA.

I don't appreciate Microsoft applications at all for a whole host of reasons, yet the most serious way to work on the Pico involves installing the necessary development environment within Visual Studio Code. Well, I'll never manage that. I just have to open the Microsoft application to be faced with all sorts of acronyms and logos, the kind I find on my washing machine, but in an A320 cockpit version, and I nearly keel over from sheer exhaustion.

So I chose a different approach, even though it's not the same thing at all. This time, I'm going with an FPGA. The advantage is that I can have more FLASH and SRAM memory to create my cartridge since I'm not constrained by the Pi Pico's internal resources. Furthermore, the FPGA will have a fast internal processor allowing me to manage the overall operation of the cartridge.

So yes, I prefer to challenge my brain with rewarding things, like learning VHDL and the whole Efinix FPGA environment, rather than constraining it with a vision of the Microsoft kind that holds no interest whatsoever.

As a result, the circuit I plan to build bears no resemblance to the early circuits and is far more complex. After doing a rough layout, I contacted specialists for the actual production of the circuit. The price is inevitably very high for this kind of 'personal DIY project', even more so in France. So, I still tried to see if I could manage to do something with my limited experience in the subject. This brings me to this result:


For now, I have absolutely no idea if I'll succeed in routing this circuit. In the past, I did the same kind of work on GoWin FPGAs, but it wasn't as difficult. The Efinix FPGA requires more external resources for its implementation.

After embarking on the adventure with Efinix FPGAs, particularly regarding the use of Efinix's internal soft processor, I must say the road had been flat so far. Now, the path is starting to get seriously steep...


dimanche 15 mars 2026

Efinix FPGA, Gusmanb Logic Analyser.

I have been experimenting with Efinix's development tools for several weeks now. Apart from the fact that overall I find these tools quite well-designed, efficient, and easy to use, the fact remains that some things don't seem very practical to me.

I replaced the built-in code editor with an external editor, namely Zed, which has a plugin for VHDL syntax highlighting. Because with the habits and automatic reflexes that develop, juggling Windows' window-parenting system becomes unbearable very quickly, much like Windows itself, actually.
Once Zed was adopted, development with the Efinix tools became fluid and productive.


Another point to mention, but here I am issuing a warning. It might indeed be that I don't possess enough knowledge to use the integrated debugging tools properly. The fact remains that the internal Efinix logic analyzer always starts from the clock root to manage its sampling frequency, and always uses a clock source from the design as a reference. I haven't yet found a solution to sample at an arbitrary clock value. And I don't even know if it's possible. No, I haven't read all the documentation. But that's beside the point anyway.

The 'thing' is that it's relatively difficult to sample low-frequency signals. Which is the case for a serial link at 19200 x 16 baud. So, how can one practically verify the signal shape on a transmission pin?
Because now that I have managed to implement the transmission and reception for a ZILOG SIO, before continuing with the implementation of functions, I first need to verify that the various formats are correctly respected. That is to say, the parity and the number of STOP bits.

The ideal in this case is obviously to have an external analyzer. I do own one, but I never managed to get used to its PC interface, and I don't find it very easy to use. Especially since it is limited in channels and recording buffer size.

A few months ago, I came across a YouTube video presenting a logic analyzer based on a Raspberry Pi Pico. The project seemed to be progressing well until it hit the famous 'bug' in this circuit's IO ports. Then, I went back to look at the project when the Pico 2 was released. The project had evolved to use this new component. It had reached the stage of proposing a printed circuit board containing all the necessary elements to serve as a 24-channel logic analyzer.

Well, 24 is 16 + 8. This is precisely why professional suppliers will never release hardware with this number of channels. Indeed, this number is exactly what is required to trace the bus of a standard 8-bit processor. Therefore, professionals provide 16-channel devices at a 'reasonable' price with mediocre graphical interfaces, offering you an upgrade to the 32-channel version at a much higher cost. This commercial practice is 'rotten' in the medium term. Because, at some point or another, a 'heretic' will release hardware at a ridiculously low cost that breaks this strategy.
And the 'heretic' in this case is Agustín Gimenez Bernad. This gentleman has developed a 24-channel logic analyzer, information for which you can find at this location: https://github.com/gusmanb/logicanalyzer

As for me, I didn't even try to build the hardware. I found a ready-to-use circuit on AliExpress at the incredible price of €28 (03/2026). In fact, I acquired two units because, as if that wasn't enough, two circuits (or more, it seems) can be chained together to increase the number of channels. I haven't tried that yet.


The software is also found on the GitHub repository. In my case, I downloaded version 6.0. Knowing that the software also has a large number of decoding 'plugins', it is necessary to install Python to take advantage of them. The only small restriction is that the Python version must be 3.13 maximum. The current version is 3.14 (03/2026), but previous versions are still readily available without issue. 

In a word, this system, on paper, seems to tick all the usability boxes for me. And, with the SIO development underway, this is the perfect time to try out this logic analyzer.
Here is what the capture of the serial frame I send from the FPGA at system startup looks like. With the UART decoding configured, it's absolute child's play not only to follow all the ASCII codes in the frame, but also, and most importantly, to visualize the effect of the parity configuration and the number of stop bits.


And so concludes the description of the process that led me to use this analyzer. Yes, because the whole thing is so well made that once you've started it up, you practically already know how to use it. One or two tries and you've got it. The 'thing' is so simple, intuitive, and powerful that I have nothing else to write. Simply magic!

This brings me to a small point about the tools used to make the development of my FPGA projects easier:

  • Zed, for easy editing of FPGA, C, and other sources.
  • GTKWave for easy visualization of signals produced by the Efinix logic analyzer.
  • Agustín Gimenez Bernad's (magic) logic analyzer.

The cost of all these tools combined is practically zero, since only the logic analyzer hardware needs to be considered, and it is extremely low.

There are solutions under Linux, but for now I haven't switched to that OS. I used it for a very long time, but until now, only the Windows system allows for reasonably consistent updates of the system and tools. Which is not the case with Linux. And since I hate wasting time on things that aren't properly managed, well, following update troubles under Linux, I stopped working with this system suited either for computer gurus or basic secretaries, but certainly not for people like me. Unless I switch to Mac, ultimately the most reliable Linux on the market ;-)
 

jeudi 12 mars 2026

Cody the Computer and Its Keyboard.

A few months ago, I presented Frederick John Milens' initiative to build a small computer in the purest style of the 80s, closely resembling the Commodore 64.

https://www.codycomputer.org/

Since then, the new version of the Commodore 64, the Commodore 64 Ultimate, has been delivered by the brand new entity Commodore.net.

What differentiates Frederick's version is that his creation uses 'real' physical components, even though the video and sound functions, among others, are handled by a Propeller processor. His concept, although not advertised as C64 compatible, nevertheless remains exactly the same style of computer as the original C64.

The version created by the new Commodore entity, for its part, is based on an FPGA design. It is not my intention to prefer one version over the other. Both are interesting. I received the latest C64 and also built a copy of the Cody.

https://www.commodore.net/


Beyond the hardware work, Frederick has done a significant amount of documentation. In fact, the book available in PDF on his website makes it possible to understand absolutely all the hardware aspects of his machine. Additionally, his GitHub repository provides absolutely all the information necessary to build the machine yourself.

As far as I'm concerned, I have only one small criticism of his approach. It concerns the keyboard. I don't see anything interesting about building a keyboard yourself. Unless, of course, it's a very special keyboard like a processor board with a hex keypad. Otherwise, I prefer to use a standard USB keyboard, and besides, it costs less. However, the USB output of the keyboard obviously needs to be adapted. I've already built this type of interface in the past for an MSX machine. On the other hand, I'm not a fan of interfacing with the original input/output port. There can sometimes be synchronization issues.

So what to do for Cody? Well, since all the resources are available on Frederick's GitHub page, I decided to try a different approach, which is to directly provide the received keyboard code onto the processor's data bus.

For this, I created a small adapter printed circuit board that I will place on the I/O interface circuit socket. It will contain the interface circuit plus the connection to a small external board designed to provide a serial output corresponding to the key codes typed on the USB keyboard.


There's also the video output, which is a bit too 'retro' for my taste, but there are inexpensive converters for this type of video output to VGA. So I may look into that later. Since I have put together VHDL resources for creating a relatively universal frame buffer, I might be able to reuse them as part of the Cody project.

mardi 10 mars 2026

I really like AI!

I really appreciate the support of AI. When I think about the so-called education I received during my schooling! As a result, well, I'm making steady progress with VHDL. A few days ago, I tackled the Zilog SIO. This time, I started without the help of AI, only relying on basic personal knowledge and what I've recently learned thanks to AI's input. The result is an encouraging start in understanding how the SIO works. For now, I'm just getting the source code to work for an 8-bit data serial link, no parity, and one stop bit. The basics, basically. And well, the result is there:
 
 


In fact, this time I'm using AI not to gain a global understanding of the art of programming in VHDL, but to acquire additional syntax for manipulating information. Indeed, even though the RX and TX functions already work, everything I've coded obviously includes all the necessary clock-domain crossings and all the synchronization between the various processes.

It's indeed better to start the code this way because the SIO has different operating modes. For now, I'm dealing with the asynchronous mode, but I'll also need to manage the synchronous mode with flag detection, because that's the mode used on the EMU1 to manage the floppy disk drive.

Just to reliably manage the asynchronous mode, there's still a fair amount of code to generate and also a lot of testing. But still, it's totally satisfying to see the birth of functional VHDL code.

I must also add that, with practice, editing code using the Efinix software's VHDL editor is not the right solution. The main reason is that multi-windowing management is catastrophic. In this respect, what I'm observing aligns perfectly with my idea of development on Windows. Efinix probably uses a framework other than Microsoft's. Consequently, the problems related to that framework are added to those purely related to Windows management.

So I use Zed, which has a plugin for VHDL and can even perform some syntax checking. It's really, really good. I now only use the Efinix IDE to launch its 'standalone' tools, in fact: compilation, the various project configurations, test-bench configuration, and the internal signal analyzer. That way, I no longer have to worry about window inheritance, because that thing is truly a nightmare on Windows!

mercredi 4 mars 2026

SDCC Z80 crt0.s & __GSINIT

Until now, I've been using SDCC without fully mastering its compilation and linking process. For simple applications on the Z80 processor, it usually worked fine, even if I sometimes encountered code that didn't behave exactly as I expected. By making simple workarounds in my C code, I could easily bypass the issue. But this time, the situation is different.

Indeed, I need to test the functionality of the Z80-style SIO that I am currently developing for an FPGA. Therefore, I need to be as confident as possible in the code produced by the compiler.

So I became interested in this 'famous' crt0.s initialization file. I have to admit that I had already looked into the matter but remained quite skeptical about how it works. Mainly regarding the method of organizing the different memory segments managed by SDCC.

No matter how I modified the order of the segments, the memory mapping never actually changed. As a result, I remained somewhat unclear on this subject. My code still worked, so it didn't really bother me. But now, it was time to address the issue. Along with the segment problem, there was also the matter of initializing the 'famous' global variables, especially if they need to be set to a specific value at application startup.

The 'famous' directives concerning memory segments in the crt0.s file:

And the correct result in this example:

Actually, I declared my code to start at 0x0110 and the data at 0x2000 because I have 8KB of program space available starting from address 0x0000, and the RAM starts at 0x2000. The output from the linking procedure correctly shows the __GSINIT routine located in the code area.

However, and this is where the problem lies, until I understood the various issues and their causes, I could never get this __GSINIT code to be placed in the code segment: it always ended up in the data segment.

This __GSINIT procedure is supposed to contain the initialization code for global variables. So until now, I was initializing these variables directly in my code. I couldn't have them properly set BEFORE the program started. That's why I sometimes had to modify my code, fully understanding why the initialization wasn't happening at startup, but having absolutely no idea why I couldn't change this behavior.

It took me a few hours to understand the underlying problem, fix it, and finally refine the crt0.s code to place the segments in the right locations and write the correct initialization sequence for global variables.

And I must say that AI was of no help whatsoever in finding the root problem. I had to search for hours on the internet without finding an answer to this issue. And yet, there are plenty of people struggling with this!

In short, the problem is actually quite simple to solve. Once done, the logic behind the segments becomes very easy to understand and define.

After half a sleepless night, I finally have knowledge of this matter. Now I can move forward with testing the SIO...