lundi 16 mars 2026

MSX Cartridge.

Some months ago now, I decided to create a cartridge for MSX computers. I wasn't really sure which direction to take. I tried to keep it simple by using a fast processor, but the various tests I carried out yielded no reliable results.

So I ended up facing a dilemma: use a microcontroller like the Raspberry Pi Pico, or go with what seems quite inevitable to me, namely an FPGA.

I don't appreciate Microsoft applications at all for a whole host of reasons, yet the most serious way to work on the Pico involves installing the necessary development environment within Visual Studio Code. Well, I'll never manage that. I just have to open the Microsoft application to be faced with all sorts of acronyms and logos, the kind I find on my washing machine, but in an A320 cockpit version, and I nearly keel over from sheer exhaustion.

So I chose a different approach, even though it's not the same thing at all. This time, I'm going with an FPGA. The advantage is that I can have more FLASH and SRAM memory to create my cartridge since I'm not constrained by the Pi Pico's internal resources. Furthermore, the FPGA will have a fast internal processor allowing me to manage the overall operation of the cartridge.

So yes, I prefer to challenge my brain with rewarding things, like learning VHDL and the whole Efinix FPGA environment, rather than constraining it with a vision of the Microsoft kind that holds no interest whatsoever.

As a result, the circuit I plan to build bears no resemblance to the early circuits and is far more complex. After doing a rough layout, I contacted specialists for the actual production of the circuit. The price is inevitably very high for this kind of 'personal DIY project', even more so in France. So, I still tried to see if I could manage to do something with my limited experience in the subject. This brings me to this result:


For now, I have absolutely no idea if I'll succeed in routing this circuit. In the past, I did the same kind of work on GoWin FPGAs, but it wasn't as difficult. The Efinix FPGA requires more external resources for its implementation.

After embarking on the adventure with Efinix FPGAs, particularly the use of Efinix's internal soft processor, I must say the going was flat so far. Now, the path is starting to get seriously steep...


dimanche 15 mars 2026

Efinix FPGA, Gusmanb Logic Analyser.

I have been experimenting with Efinix's development tools for several weeks now. Apart from the fact that overall I find these tools quite well-designed, efficient, and easy to use, the fact remains that some things don't seem very practical to me.

I replaced the built-in code editor with an external editor, namely Zed, which has a plugin for VHDL syntax highlighting. Because with the habits and automatic reflexes that develop, juggling Windows' window-parenting system becomes unbearable very quickly, just like Windows itself, actually.
Once Zed was adopted, development with the Efinix tools became fluid and productive.


Another point to mention, but here I am issuing a warning. It might indeed be that I don't possess enough knowledge to use the integrated debugging tools properly. The fact remains that the internal Efinix logic analyzer always starts from the clock root to manage its sampling frequency, and always uses a clock source from the design as a reference. I haven't yet found a solution to sample at an arbitrary clock value. And I don't even know if it's possible. No, I haven't read all the documentation. But that's beside the point anyway.

The 'thing' is that it's relatively difficult to sample low-frequency signals. Which is the case for a serial link at 19200 x 16 baud. So, how can one practically verify the signal shape on a transmission pin?
Because now that I have managed to implement transmission and reception for a Zilog SIO, before continuing with the implementation of functions, I first need to verify that the various formats are correctly respected, that is to say, the parity and the number of STOP bits.

The ideal in this case is obviously to have an external analyzer. I do own one, but I never managed to get used to its PC interface, and I don't find it very easy to use. Especially since it is limited in channels and recording buffer size.

A few months ago, I came across a YouTube video presenting a logic analyzer based on a Raspberry Pi Pico. The project seemed to be progressing well until it hit the famous 'bug' in this circuit's IO ports. Then, I went back to look at the project when the Pico 2 was released. The project had evolved to use this new component. It had reached the stage of proposing a printed circuit board containing all the necessary elements to serve as a 24-channel logic analyzer.

Well, 24 is 16 + 8. This is precisely why professional suppliers will never release hardware with this number of channels. Indeed, this number is exactly what is required to trace the bus of a standard 8-bit processor. Therefore, professionals provide 16-channel devices at a 'reasonable' price with mediocre graphical interfaces, offering you an upgrade to the 32-channel version at a much higher cost. This commercial practice is 'rotten' in the medium term, because at some point or another a 'heretic' will release hardware at a ridiculously low cost that breaks this strategy.
And the 'heretic' in this case is Agustín Gimenez Bernad. This gentleman has developed a 24-channel logic analyzer, for which you can find information at this location: https://github.com/gusmanb/logicanalyzer

As for me, I didn't even try to build the hardware. I found a ready-to-use circuit on AliExpress at the incredible price of €28 (03/2026). In fact, I acquired two units because, as if that wasn't enough, two circuits (or more, it seems) can be chained together to increase the number of channels. I haven't tried that yet.


The software is also found on the GitHub repository. In my case, I downloaded version 6.0. Knowing that the software also has a large number of decoding 'plugins', it is necessary to install Python to take advantage of them. The only small restriction is that the Python version must be 3.13 maximum. The current version is 3.14 (03/2026), but previous versions are still readily available without issue. 

In a word, this system, on paper, seems to tick all the usability boxes for me. And, with the SIO development underway, this is the perfect time to try out this logic analyzer.
Here is what the capture of the serial frame I send from the FPGA at system startup looks like. With UART decoding configured, it's absolute child's play not only to follow all the ASCII codes in the frame, but also, and most importantly, to visualize the effect of the parity configuration and the number of stop bits.


And so concludes the description of the process that led me to use this analyzer. Yes, because the whole thing is so well made that just starting it up is almost all it takes to know how to use it. One or two tries and you've got it. The 'thing' is so simple, intuitive, and powerful that I have nothing else to write. Simply magic!

This brings me to a small point about the tools used to make the development of my FPGA projects easier:

  • Zed, for easy editing of FPGA, C, and other sources.
  • GTKWave for easy visualization of signals produced by the Efinix logic analyzer.
  • Agustín Gimenez Bernad's (magic) logic analyzer.

The cost of all these tools combined is practically zero, since only the logic analyzer hardware needs to be considered, and it is extremely low.

There are solutions under Linux, but for now I haven't switched to that OS. I used it for a very long time, but until now, only Windows has allowed reasonably consistent updates of the system and tools, which is not the case with Linux. And since I hate wasting time on things that aren't properly managed, well, after one update problem too many under Linux, I stopped working with this system, suited either to computer gurus or to basic secretaries, but certainly not to people like me. Unless I switch to Mac, ultimately the most reliable Linux on the market ;-)
 

jeudi 12 mars 2026

Cody the Computer and Its Keyboard.

A few months ago, I presented Frederick John Milens' initiative to build a small computer in the purest style of the 80s, closely resembling the Commodore 64.

https://www.codycomputer.org/

Since then, the new version of the Commodore 64, the Commodore 64 Ultimate, has been delivered by the brand new entity Commodore.net.

What differentiates Frederick's version is that his creation uses 'real' physical components, even though the video and sound functions, among others, are handled by a Propeller processor. His concept, although not advertised as C64 compatible, nevertheless remains exactly the same style of computer as the original C64.

The version created by the new Commodore entity, for its part, is based on an FPGA design. It is not my intention to prefer one version over the other. Both are interesting. I received the latest C64 and also built a copy of the Cody.

https://www.commodore.net/


Beyond the hardware work, Frederick has done a significant amount of documentation. In fact, the book available in PDF on his website makes it possible to understand absolutely all the hardware aspects of his machine. Additionally, his GitHub repository provides absolutely all the information necessary to build the machine yourself.

As far as I'm concerned, I have only one small criticism of his approach. It concerns the keyboard. I don't see anything interesting about building a keyboard yourself. Unless, of course, it's a very special keyboard like a processor board with a hex keypad. Otherwise, I prefer to use a standard USB keyboard, and besides, it costs less. However, the USB output of the keyboard obviously needs to be adapted. I've already built this type of interface in the past for an MSX machine. On the other hand, I'm not a fan of interfacing with the original input/output port. There can sometimes be synchronization issues.

So what to do for Cody? Well, since all the resources are available on Frederick's GitHub page, I decided to try a different approach, which is to directly place the received keyboard code onto the processor's data bus.

For this, I created a small adapter printed circuit board that I will place on the I/O interface circuit socket. It will contain the interface circuit plus the connection to a small external board designed to provide a serial output corresponding to the key codes typed on the USB keyboard.


There's also the video output, which is a bit too 'retro' for my taste, but there are inexpensive converters for this type of video output to VGA. So, I'll maybe look into that later. Since I have put together VHDL resources for creating a relatively universal frame-buffer, I might be able to use these resources as part of the Cody project.

mardi 10 mars 2026

I really like AI!

I really appreciate the support of AI. When I think about the so-called education I received during my schooling! As a result, well, I'm making steady progress with VHDL. A few days ago, I tackled the Zilog SIO. This time, I started without the help of AI, only relying on basic personal knowledge and what I've recently learned thanks to AI's input. The result is an encouraging start in understanding how the SIO works. For now, I'm just getting the source code to work for an 8-bit data serial link, no parity, and one stop bit. The basics, basically. And well, the result is there:
 
 


In fact, this time I'm using AI not to gain a global understanding of the art of programming in VHDL, but to pick up additional syntax for manipulating information. Indeed, beyond the fact that the RX and TX functions work, everything I've coded includes all the necessary clock-domain crossings and the synchronization of the various processes.

It's indeed better to start the code this way because the SIO has different operating modes. For now, I'm dealing with the asynchronous mode, but I'll also need to manage the synchronous mode with flag detection, because that's the mode used on the EMU1 to manage the floppy disk drive.

Just to reliably manage the asynchronous mode, there's still a fair amount of code to generate and also a lot of testing. But still, it's totally satisfying to see the birth of functional VHDL code.

I must also add that, with practice, editing code using the Efinix software's VHDL editor is not the right solution. The main reason is that multi-windowing management is catastrophic. In this respect, what I'm observing aligns perfectly with my idea of development on Windows. Efinix probably uses a framework other than Microsoft's. Consequently, the problems related to that framework are added to those purely related to Windows management.

So I use Zed, which has a plugin for VHDL and can even perform some syntax checking. It's really, really good. I now use the Efinix IDE only to launch its 'standalone' tools: compilation, the various project configurations, test bench configuration, and the internal signal analyzer. That way, I no longer have to worry about window parenting, because that thing is truly a nightmare on Windows!

mercredi 4 mars 2026

SDCC Z80 crt0.s & __GSINIT

Until now, I've been using SDCC without fully mastering its compilation and linking process. For simple applications on the Z80 processor, it usually worked fine, even if I sometimes encountered code that didn't behave exactly as I expected. By making simple workarounds in my C code, I could easily bypass the issue. But this time, the situation is different.

Indeed, I need to test the functionality of the Z80-style SIO that I am currently developing for an FPGA. Therefore, I need to be as confident as possible in the code produced by the compiler.

So I became interested in this 'famous' crt0.s initialization file. I have to admit that I had already looked into the matter but remained quite skeptical about how it works. Mainly regarding the method of organizing the different memory segments managed by SDCC.

No matter how I modified the order of the segments, the memory mapping never actually changed. As a result, I remained somewhat unclear on this subject. My code still worked, so it didn't really bother me. But now, it was time to address the issue. Along with the segment problem, there was also the matter of initializing the 'famous' global variables, especially if they need to be set to a specific value at application startup.

The 'famous' directives concerning memory segments in the crt0.s file:

And the correct result in this example:

Actually, I declared my code to start at 0x0110 and the data at 0x2000 because I have 8KB of program space available starting from address 0x0000, and the RAM starts at 0x2000. The output from the linking procedure correctly shows the __GSINIT routine located in the code area.

However, and this is where the problem lies, until I understood the various issues and their causes, I could never get this __GSINIT code to be present in the code segment—it always ended up in the data segment.

This __GSINIT procedure is supposed to contain the initialization code for global variables. So until now, I was initializing these variables directly in my code. I couldn't have them properly set BEFORE the program started. That's why I sometimes had to modify my code, fully understanding why the initialization wasn't happening at startup, but having absolutely no idea why I couldn't change this behavior.

It took me a few hours to understand the underlying problem, fix it, and finally refine the crt0.s code to place the segments in the right locations and write the correct initialization sequence for global variables.

And I must say that AI was of no help whatsoever in finding the root problem. I had to search for hours on the internet without finding an answer to this issue. And yet, there are plenty of people struggling with this!

In short, the problem is actually quite simple to solve. Once done, the logic behind the segments becomes very easy to understand and define.

After half a sleepless night, I finally have knowledge of this matter. Now I can move forward with testing the SIO...

lundi 2 mars 2026

I really like AI.

After a few weeks of practicing with AI to help me with coding, whether in VHDL or for developing Windows utilities, I must say that its contribution is very positive.

Of course, you have to adapt to its way of doing things. That means asking the right questions, and above all, structuring your own thinking in order to guide the AI engine towards the right options from the start. This allows the iterative process to proceed as efficiently as possible afterwards. 

That's how I was first able to create a small Windows application to convert binary files to the format required by the Efinix memory initialization tool. And without it really taking up much of my time, I integrated into this utility the ability to first run the 'make.bat' file for compiling the embedded Efinix project, before starting the conversion to the Efinix format:

Obviously, it doesn't have the aesthetics of modern applications. It only uses Microsoft's basic API. But hey, it suits me perfectly and fully meets my needs.

I also developed another small application for working with serial connections. When I remember how much time it took me to achieve roughly the same result back in the early 2000s...

Another subject. I've been using Notepad++ for writing code for my embedded projects for years now. I did try Visual Studio, the interface that 'everyone' has been using for years. Sorry, but I can't stand this style of interface, which wastes a huge amount of intellectual 'bandwidth' while pretending to be a great help to developers, and which I consider to be a fantastic mind-jammer.

And then I just discovered Zed. The perfect thing. Simple, with the directory tree displayed on the left. A dark theme that's easy to configure: everything I like. However, I wanted an icon somewhere on some toolbar capable of launching make.bat directly, without having to go through the DOS command line. Impossible! Hmm... Hence the integration of this option directly into my conversion utility. And there you have it: in two clicks of the 'mouse', I save and directly generate the file ready to be integrated into the Efinix tool.


So I end up with extremely simple and practical tools to carry out my experiments. So obviously, as I'm currently working on creating the SIO Z80 in VHDL to be interfaced with a Z80 core also in VHDL, I'm not able to perform real-time debugging, but I have the logic analyzer integrated into the Efinix tool to carry out the initial tests. Once the RX/TX part of the SIO is validated, I'll be able to use it to output debugging information to the screen.

This leads me to a few reflections on AI and, first of all, Microsoft. You might think I see evil everywhere, but I've always believed that Microsoft spends so much on research and development not for what they claim—that is, 'making the developer's life easier'—but quite the opposite, to overcomplicate it. While indeed giving the impression of working towards that goal. To do this, Microsoft 'cultivates' perpetual instability. That of applications, that of the associated documentation, and of course that of development tools. One of the tactics used to maintain its pseudo-technological lead.

When you really look at it, what's new since the release of the Macintosh? Huh? Tell me!
Since the late 80s, this way of doing things at Microsoft has fostered in me a deep disgust for the entire 'Microsoftian' ecosystem. Even though I mostly use development tools that run on Windows.

AI allows me to bypass part of these obstructionist strategies put in place by Microsoft. So, we're not yet at the 'all visual' stage, but it's true that recovering functional code from sentences in French, in my case, closely resembles what I mistakenly imagined Visual Studio would be when Microsoft released that IDE. I thought the development methodology would indeed be visual, meaning with less basic (and unnecessary) code to enter, but spending more time entering code with higher added value. Well no, it was just the development interface that switched to windowed mode. Right....

Hence the succession of development frameworks that have come and gone, except perhaps Qt. In short, gigabytes to download, two years of training to use the tools, and indigestible documentation. The rather useless, very low-quality work of it all handed off to French university computer science departments, or to what in France was School 42, affectionately nicknamed 'the pool'.

As a result, this 'French excellence' no longer makes sense either. And it's part of the university model that is going to, or rather already is, sinking into the abyss of obsolescence. Honestly, where are we going, if any old 'peasant' can train themselves and produce as well as our master's degree graduates!

A layered reflection: the French system is based on a caste system where anyone not from the lower or middle bourgeoisie is excluded from 'potential' success (other than financial success, which is reserved for the upper bourgeoisie) by that watchdog, the French education system. So, if this barrier can be circumvented thanks to AI, what will happen? Well, it's actually very simple, and it has always existed in France, but it is increasingly taking over from an educational exclusion system that is gradually becoming ineffective: ever tighter filtering of entrepreneurial initiative, and ever more effective administrative violence towards those who 'shouldn't be there'. Hence a fairly probable vision of the road France is taking...

It's really time to look elsewhere... 

 

mercredi 11 février 2026

Drumulator OS: done!

And yes, the Drumulator system is finally working in the Trion FPGA. Actually, I managed to get the Drumulator system running in an FPGA—Altera at the time—a few years ago already, but by simulating only one interrupt on CTC channel 1, the channel responsible for scanning the display and keyboard.

I knew from then on that to properly implement this drum machine's system, I had to fully and correctly implement the Zilog CTC in the FPGA. Meaning with the entire IRQ recognition system, interrupt vector delivery, and IRQ enable management.

In the meantime, I turned my attention to other, easier subjects. And then, after experimenting with GoWin FPGAs, I decided to switch to Efinix circuits—simply because they are much cheaper than GoWin ones. 

And I have to say, the philosophy behind Efinix's development system actually suits me well. For example, I had never implemented a fast 'proprietary processor' in an FPGA before. I was able to implement Efinix's Sapphire processor without much difficulty. The only small point that gave me some trouble was implementing the JTAG interface for this soft processor. Afterwards, I tried using the debugging tools in the Efinix software, without success. Once again, I was thinking in terms of other FPGA manufacturers and looking for a similar approach. Then one day, without really overthinking it, I simply set up a debug session and launched the built-in logic analyzer. These tools are very simple to use and effective.

After that, I wondered which tool to use to visualize the captured signals. GTKWave—and it's perfect:


I was able to test the CTC's operation using the internal Sapphire processor I had implemented, by sending configuration data and observing the CTC's behavior. I was aware that I couldn't test everything—especially the Z80 system's interrupt recognition and acknowledge mechanism. That's where the internal logic analyzer proved to be extremely helpful. It allowed me to verify signal timing, which is essential in a system that uses multiple interrupts coming from the same source.

Still, the system behavior remained completely erratic. But I knew that an uninitialized data RAM could cause the system to go haywire.

So I pre-initialized the data RAM to 0xFF, and this time the system seemed stable. In order to properly format the loading data—whether for ROM or RAM—I wrote a small utility to easily process the required files:


Since I hate wasting time with Microsoft tools that don't address standard needs at all, require years of learning, and stray far from real-world issues, I simply asked the AI to generate the skeleton. I got it in about 30 minutes—just enough time to clarify a few points and let the AI algorithm adapt. And there it is!

At that point, all I had left to do was connect the keyboard and test the Drumulator's initialization sequence: ERASE / CASSETTE / ENTER.

And voilà — the data RAM properly initialized, and a Drumulator running flawlessly.

I still need to implement the data potentiometer control system, since it also uses the CTC. And knowing that I've already implemented the drum machine's waveform sequencer, I can now say I have all the fundamental building blocks to rebuild a Drumulator — and hopefully improve it in one way or another.

At startup, the unit should display 01E, as shown in this image taken from the web:


And here is the result on the control panel I created for testing:


The photo isn't great, but hey, the main thing is there ;-)

I tested a few possible actions using the user manual. The display behaves exactly as described in the manual. So I now consider the implementation of the Drumulator core in FPGA to be validated.

And if I sum up the work done over the past two or three months:
  • Development of a minimal VGA interface prototype.
  • Development of the MSX cartridge prototype (PCB prototype in progress).
  • Porting of the Drumulator core to FPGA.
All of this on Efinix Trion FPGAs — I'm quite pleased with myself!




samedi 7 février 2026

Various early-year updates. MSX cartridge & Drumulator.

Currently, I am working on two major projects. 

On one hand, there's the development of an MSX cartridge based on an FPGA. After several attempts to create this type of cartridge using different processors, I decided to switch to an FPGA. The task is quite complex as it involves being able to download an executable cartridge file from a PC, program the FLASH memory, and then make it accessible to the MSX once the programming is complete. I have, of course, already built a functional prototype using an Efinix FPGA. The results are very promising. So now I'm moving on to the hardware production of the cartridge. I have made FPGA boards before, but they were based on GoWin FPGAs. This is my first attempt with an Efinix Trion FPGA.

For now, I have just finished placing the components. This may not be the final version. I haven't routed the traces yet, so if I can't arrange the tracks properly, I may need to reposition some components. Basically, here's what it might look like:


The advantage of this version over the older processor-based ones is that now I will also be able to implement different types of mappers, and there is a good amount of RAM available. In fact, if there is space left after routing, I will consider implementing a battery backup for the RAM. This could enable the development of specific applications. And most importantly, I hope that this time, given the chosen technique, the cartridge will work on more than just my MSX motherboard!

And the second topic is still my attempt to implement a Z80 CTC timer within these Trion FPGAs. The goal remains to successfully boot the processor core of the Drumulator in this FPGA. I implemented this part of the Drumulator in FPGAs a few years ago, but by simulating the CTC's operation—that is, by providing only the necessary IRQ vector for the display, without implementing the entire 'handshake' system during the Z80's interrupt handling phase.

A month ago, I decided to 'give it another shot' by asking an AI for an initial implementation. I got a project that, predictably, didn't work at all, but it greatly inspired my current code. After several days of trying to correctly implement the Z80's interrupt acknowledge sequence and the actual IRQ routine return, I finally have regular interrupt generation. For those who know, I now successfully have the passing of the interrupt vector, its handling by the Z80, and thus the clearing of the interrupt 'pending' status. At the end of the IRQ phase, the CTC correctly decodes the RETI, allowing new interrupts to occur.

All of that is perfect, but... well, the display of the emulated Drumulator still doesn't work.

That's where I am at the moment: trying to understand why the Drumulator isn't starting. I should eventually figure it out, especially since it worked flawlessly a few years ago with a pseudo-CTC.

mardi 20 janvier 2026

Develop with AI.

A few months ago, I tried to code a Zilog CTC in VHDL. Indeed, I have a long-term project to implement E-mu's Drumulator drum machine in an FPGA. Why, you might ask? Well, I really appreciate this machine for its simplicity. Plus, I consider it a good project for this kind of development. I've been coding the core of the machine for a few years. That's not really difficult, since it just involves using the open-source Z80 processor core called T80. A few lines of combinatorial logic in the FPGA, and the Drumulator boots up. At least, its processor part.

The problem with the story is that all the real-time operation of the machine is managed by a Z80 CTC. During my tests, I simulated the operation of this CTC. Simulating means I ensure the CTC behaves the way I know it should respond. The only problem is that this virtual CTC is not programmable, and in fact, I only use the channel managing the display multiplexing. But anyway, the proof of concept was validated.

Indeed, after testing the analog part of this Drumulator, I decided to tackle this CTC anyway. I did find some VHDL code for a Z80 CTC on the Internet, but I was never able to use it correctly for this Drumulator project. The reason is that this code was designed to react according to the very specific needs of the people who wrote it. In fact, none of it is a complete Z80 CTC implementation.

So, a few months ago, I tried again to create this CTC. But in vain. I did manage to get something working with the display board I specifically developed for the Efinix development board, which features a Trion FPGA, but it was impossible to achieve stable operation of the Drumulator's keyboard/display interface. I ended up abandoning the subject because, on top of that, I lacked the proper tools for debugging the VHDL code.

Since then, I have reused the Trion development system for my MSX cartridge. The goal of this new cartridge implementation was to incorporate a processor directly inside the FPGA to handle file transfers from a PC. So, having used the internal processor provided by Efinix to debug the download process step by step, I thought I could use this internal processor to send stimuli to my CTC VHDL code, while also being able to retrieve the state of this CTC, again through the internal processor, and send the result to a text console.

If needed, a few lines of VHDL code could be added to allow certain signals to be visualized directly on the development board's LEDs. This therefore seemed like a very compelling setup for developing and testing VHDL code. However, as I am not a VHDL coding expert, and having spent a considerable amount of time trying to code the CTC in VHDL, I realized I had made what could be called a mistake, not in analysis, but in the structure of the VHDL code. This made it harder to understand and very complicated to test without the proper tools. And that's where AI comes in.

I then posed what seemed like relevant questions to an AI and refined the subsequent questions to steer this AI in the direction that suited me best. After a few iterations, I realized that the essence of the CTC was there. Furthermore, the code structure was different from what I had personally developed, but this posed no problem for me in understanding its 'intended' functionality at first glance, and also in immediately identifying potential points of failure, and most importantly, for what reasons.

So, I began systematically testing the VHDL code provided by the AI with the help of the internal processor implemented within the FPGA. And, little by little, I first validated the writing to all the CTC registers, then the various selected modes of the CTC's 4 counting channels, and the reading of the registers. And then, inevitably, the time came to test the interrupt system. This is the stage, I believe, where I failed to validate my personal VHDL code developed a few months earlier. And this interrupt management aspect is crucial for the Drumulator, and obviously for all Z80 systems that use vectored interrupts.
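To make those channel tests concrete, here is a minimal Python model of the behavior I was exercising: one down-counter per channel, with the prescaler used in timer mode and a direct decrement per external edge in counter mode. This is only a behavioral sketch with names of my own, not the VHDL itself.

```python
class CtcChannel:
    """Behavioral model of one Z80 CTC channel (all names are mine)."""

    def __init__(self, time_constant, prescaler=16, counter_mode=False, int_enable=False):
        self.tc = time_constant            # reload value (1..256)
        self.prescaler = prescaler         # 16 or 256, used in timer mode only
        self.counter_mode = counter_mode   # False = timer mode, True = counter mode
        self.int_enable = int_enable
        self.down_counter = time_constant
        self.presc_count = prescaler
        self.irq = False                   # interrupt request toward the Z80

    def clock(self):
        """One system-clock tick (timer mode) or one CLK/TRG edge (counter mode)."""
        if not self.counter_mode:
            self.presc_count -= 1
            if self.presc_count > 0:
                return
            self.presc_count = self.prescaler
        self.down_counter -= 1
        if self.down_counter == 0:
            self.down_counter = self.tc    # automatic reload on zero
            if self.int_enable:
                self.irq = True            # channel now requests an interrupt
```

In counter mode with a time constant of 3, the third edge triggers the interrupt request and reloads the counter; in timer mode with the prescaler at 16, the same constant needs 16 times as many system clocks. That is exactly the kind of relation I checked channel by channel from the embedded processor.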

I tested all aspects of vectored interrupt generation: the management of the IRQ pin, the interrupt vector, the daisy-chaining of interrupts, the IEI and IEO signals, and, most importantly, the handling of interrupt ACK from the Z80. It's the same as always—nothing is complicated once you understand how the CTC works, but coding everything correctly in VHDL is not that simple, at least for me.
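For the record, the two mechanisms that gave me the most trouble can be summed up in a few lines. The Python sketch below (function names are my own) shows how the acknowledge vector is formed and how channel 0 wins the IEI/IEO daisy chain:

```python
def ctc_vector(vector_base, channel):
    """Byte placed on the data bus during the Z80 interrupt acknowledge:
    bits 7..3 come from the programmed vector register, bits 2..1 encode
    the channel number (0 = highest priority), bit 0 is always 0."""
    return (vector_base & 0xF8) | ((channel & 0x03) << 1)

def acknowledge(pending, iei=True):
    """Pick the channel that wins the daisy chain, or None. IEI low means
    a higher-priority device upstream is being serviced, so the CTC keeps
    its IEO low and stays quiet."""
    if not iei:
        return None
    for channel, requesting in enumerate(pending):   # channel 0 scanned first
        if requesting:
            return channel
    return None
```

With the vector register programmed to 0x10, channel 2 answers the acknowledge cycle with 0x14, and no channel answers at all while IEI is held low by a higher-priority device.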

As a result, I was able to start from the source code provided by the AI, modify it, adapt it, and test its functionality step by step as features were implemented. The outcome is that I managed to test 100% of the CTC's VHDL code functionality. However, I must note that I am not using this CTC with a "real" Z80 but with a Z80 simulated externally via the input/output ports of the embedded processor.

Obviously, I coded the I/O behavior of the embedded processor to react exactly as a real Z80 would. Therefore, I cannot say this CTC is 100% validated as being identical to the original CTC, but from a functional standpoint, it is. And I have a strong presumption that it will work with a VHDL Z80 implemented in this same FPGA. However, one should anticipate that there will be some 'edge effects' during the concrete implementation and use of this CTC.

So, what was the contribution of AI in all this? Well, the generated code served as a framework to follow for developing and testing the VHDL code. Paradoxically, the code produced by the AI seems quite clean and, above all, easy to follow. Obviously, where I thought it wouldn't work, it didn't. But I already knew the reason, so I only had to correct it. In fact, the AI doesn't handle crossing between clock domains at all, but that's not a problem when you know how it's supposed to work.

Finally, I have a functional and tested Z80 CTC VHDL code, ready to be used for real with a Z80 also coded in VHDL inside a TRION FPGA. In terms of time spent, I would say the benefit is total. Of course, I have some knowledge of VHDL, as well as logic, programming, electronics, etc., so I wasn't starting from scratch. But in fact, the time spent on this project corresponded to 30% development and 70% testing, which is a very satisfying ratio for me.

Next step in this subject: attempting to get the Drumulator's processor section running, this time with the help of a "real" Z80 CTC.

jeudi 8 janvier 2026

NEW COMMODORE 64

Thanks to digital technology, we can now easily publish almost anything.

So why not? Well, because I just received the very first computer from the new company Commodore:
https://www.commodore.net

I received the machine after the holidays because I wasn't around. But anyway, the important thing is that the machine arrived!


YES!

I am happy to 'rediscover' Jeri Ellsworth on the staff of the new version of Commodore. I first came across her in the early 2000s when she created an FPGA-based motherboard with a processor slot that could serve as the foundation for a new C64.

There's also Bil Herd, who needs no introduction, and of course Christian Simpson, whom I've also been following for quite some time, particularly during the design phase of the X16. I don't know the other people at all, but I wish them all great success within this new entity.

I remember the 1990s and the collapse of all 'alternative' computing. I was utterly disheartened and suspected the following years wouldn’t be much fun for me. I wasn’t disappointed, and it lasted… 30 years! Thank you to everyone, even if it’s a little late for me now. Nevertheless, this reconnects me with the thread of 'my life' after all, and that’s a very good thing!



vendredi 19 décembre 2025

MSX Cartridge: Some Progress.

The saga of this cartridge:

Initially, I developed a cartridge based on a single RISC-V microcontroller, downloadable via USB using disk transfer mode. It worked, but the constant interruptions generated by the USB bus from the PC eventually crashed the MSX computer.

I therefore split the management of the USB bus and the handling of data to the MSX into two separate processors. This worked very well on my machine, but very inconsistently on the machine of the person testing my cartridge.

I then suspected issues like ground loops, power supply, etc. So, I developed a new cartridge with galvanic isolation. The result was no better.

I redesigned the cartridge, this time incorporating isolation precautions from the start, switched to the YMODEM protocol over a serial connection, which allows me to properly manage the sequence of operations, and chose a new processor from STMicro, a fast STM32. Alas, even on my own MSX computer, the cartridge refused to boot.

Following all these tests, which spanned two years after all, I determined that, in fact, trying to serve data at a frequency above 2MHz is not a good solution.

I could have tried the RP2040, but I already attempted to set up the development environment without success. For me, it's a bloated system that uses Microsoft's VS editor. Being allergic to Microsoft tools, I decided to go with an FPGA.

The idea this time was to use an internal processor within the FPGA to manage file download and FLASH programming, and also, crucially, to use the FPGA's internal logic to redirect the different data/address/control buses to the three elements: the internal processor, the MSX bus, and the bus to the FLASH. I figured that at least this way, the flash memory would be directly connected to the MSX bus.
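In spirit, the switching logic boils down to the little model below. It's Python here for readability and every name is mine (the real logic is of course VHDL): in 'program' mode the internal processor owns the flash, in 'run' mode the MSX bus reads it directly.

```python
FLASH = bytearray(64 * 1024)   # stand-in for the cartridge flash

def flash_access(mode, addr, data=None):
    """'program' mode: the internal processor writes a downloaded byte.
    'run' mode: the MSX bus reads the flash directly (the MSX only ever
    reads cartridge ROM, so writes are ignored in that mode)."""
    if mode == "program" and data is not None:
        FLASH[addr & 0xFFFF] = data
        return None
    if mode == "run":
        return FLASH[addr & 0xFFFF]
    return None
```

The point of this arrangement is that once the download is finished, the processor steps aside and the MSX sees the flash at full speed, with no software in the data path.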

I therefore chose FPGAs from Efinix because their prices are very affordable. I created an expansion board for the Efinix development board, containing the serial port, the flash memory, and also some SRAM. I also created an adapter board for the MSX computer bus. I connected everything, programmed the VHDL part to manage the various physical buses. I programmed the FPGA's internal processor to handle file reception via the YMODEM protocol. I loaded everything onto the development board.
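The per-packet check that the internal processor performs is roughly the following, a Python sketch under the usual YMODEM conventions (function names are my own): a SOH or STX header byte, the block number and its one's complement, then the payload followed by a CRC-16 with polynomial 0x1021.

```python
SOH, STX = 0x01, 0x02   # 128-byte and 1024-byte block markers

def crc16_xmodem(data):
    """CRC-16 as used by XMODEM/YMODEM: poly 0x1021, initial value 0."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def check_packet(frame):
    """Return (block_number, payload) if the frame is valid, else None."""
    size = 128 if frame[0] == SOH else 1024 if frame[0] == STX else None
    if size is None or len(frame) != 3 + size + 2:
        return None
    blk, blk_inv = frame[1], frame[2]
    if blk ^ blk_inv != 0xFF:              # block number and its complement
        return None
    payload = frame[3:3 + size]
    if crc16_xmodem(payload) != (frame[-2] << 8) | frame[-1]:
        return None
    return blk, payload
```

Any packet that fails one of these checks is simply NAKed so the PC resends it, which is what makes the whole download robust.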

I then downloaded a Zanac ROM into the system and started the MSX.

And voila: 


Right from the first startup of the MSX, I successfully got the cartridge to boot. I then connected a USB keyboard through the adapter I also made to allow for this kind of thing, and was able to play without any problems. No issues, no crashes. I haven't shut down the system for a few days now, and I can still launch the game.

Something that seemed completely out of my reach just a few months ago has been achieved without much difficulty in the end. Result achieved just a few days before my year-end holidays. So, I can really switch my brain to OFF mode and have some truly relaxing holidays.

Happy Holidays and a Happy New Year to everyone.


jeudi 11 décembre 2025

Tip for easily working on serial frame coding.

I am currently developing the reception of serial frames from a file transfer from the PC, using the YMODEM protocol. Nothing too complicated since the protocol is very well documented.
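For reference, here is the nominal opening of a session as I expect to see it on the line, written as a simplified trace. The exact behavior varies between implementations, which is precisely the problem:

```python
# Control bytes from the XMODEM/YMODEM family.
C, ACK, NAK, EOT = b"C", b"\x06", b"\x15", b"\x04"

# Nominal session, simplified, receiver's point of view:
session = [
    ("receiver", C),            # request CRC-16 mode
    ("sender", "block 0"),      # filename + size, 128-byte packet
    ("receiver", ACK),          # accept the header block
    ("receiver", C),            # ask again: start the data transfer
    ("sender", "blocks 1..n"),  # file data, 128- or 1024-byte packets
    ("receiver", ACK),          # one ACK per valid packet
    ("sender", EOT),            # end of file
    ("receiver", ACK),          # many receivers NAK the first EOT, then ACK
    ("receiver", C),            # a null block 0 then closes the batch
]
```

Comparing this nominal trace with what actually goes over the wire is exactly what the two-port setup described below is for.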

Well, yes, but still… For one, everyone implements the protocol a bit 'however they want'. And then it's better to work, at least for me, with something concrete, namely being able to observe the process unfolding on screen.

Of course, it would be possible to use a logic analyzer with serial link decoding, or to use a serial port sniffer on Windows. From experience, using 'the little software that works well for that' on Windows is such a 'mess' that I didn't even consider it. Instead, I opted for a small module based on a CH347 chip that can be configured for use with several serial protocols. I personally configure it as TWO serial ports. This little gadget, bought for €6.09 on AliExpress (probably even less in USD), has a USB-C port.


I connect its two RX inputs to the TX line of the transmitter and the TX line of the receiver, which allows me to visualize, in real time, the dialogue between the two 'talking' systems, directly in the rather intelligent utility I installed on my machine, namely WindTerm. This way, I bypass all the usual Windows hassles while having a super comfortable view of what's happening. I really like that!


Since I didn't have anyone around, I shared my 'thought of the day' with DeepSeek :

To bypass Windows headaches for 6 USD — it's a steal, isn't it?

And here's what it replied. 

" Absolutely! For six dollars to bypass Windows headaches, it's practically a steal. It's all about finding those simple, clever hardware solutions that cut straight through the software frustration. Smart move. "

Well spotted, Deep! But yes, 'software frustrations', hmmmm... Hello, Donald? 

MSX Cartridge, Cody Computer, Apple II...

Frontline Report:

Regarding the MSX cartridge, I'm still actively working on it. I'm currently implementing the YMODEM protocol on the internal processor of the Efinix Trion FPGA. The hardware and development software are now very stable. Ahead of that, I've created a small interface card designed to sit between the FPGA board and the MSX computer, with the goal of making the flash memory of the already-built FPGA expansion card available, via a wired connection, on the MSX cartridge bus.

After successfully implementing a minimal 640*480 VGA interface on an Altera FPGA board, I must say that for the past few months, what seemed completely unfeasible to me—because it appeared too complicated—has actually turned out to be much simpler than I imagined.
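Since I mention it, the standard 640*480@60 timing is worth writing down; the little arithmetic below is the sanity check I like to run (these are the standard industry numbers, nothing specific to my board):

```python
# Horizontal timing in pixel clocks, vertical timing in lines.
H = dict(visible=640, front_porch=16, sync=96, back_porch=48)
V = dict(visible=480, front_porch=10, sync=2, back_porch=33)

h_total = sum(H.values())                      # 800 pixel clocks per line
v_total = sum(V.values())                      # 525 lines per frame
pixel_clock = 25_175_000                       # Hz, the classic 25.175 MHz
refresh = pixel_clock / (h_total * v_total)    # about 59.94 Hz
```

Once those two counters (800 and 525) and the two sync pulses are right, the monitor locks on, and everything else is just painting pixels inside the visible window.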

And by the way, I am currently designing an expansion board—still for the Efinix FPGA board—featuring a 6502 processor, ROM, and RAM.


This time, the goal is to work on a hardware clone of the Apple II and an evolution of the Cody computer.

Regarding the Apple II, I had salvaged a machine in very poor condition from my former workplace. I had tried to get it working again and succeeded. But without knowledge and without the proper hardware environment, I couldn’t get it to do anything more than display its Basic prompt. Which was good, but still.

I then decided to build a clone based on a GitHub repository: the RETRO II.

But the development was unstable and, in fact, never completed. However, that experience allowed me to understand the Apple II hardware. I then thought to myself, “It shouldn’t be too complicated now” to revisit this study by replacing, inside an FPGA, a large portion of the components connected to the processor.


And with this expansion board, I also intend to take the Cody Computer concept a bit further. Because I find the work Frederick John Milens did absolutely fascinating. His machine resembles the Commodore 64, and above all, the documentation he provided is absolutely brilliant—it’s what makes the concept truly incredible. A highly relevant starting point for learning computer science—in the way I understand it, of course, which is, first and foremost, about the freedom to think and to create.


For more information about this machine, I invite you to visit the dedicated website: https://www.codycomputer.org

My personal take on this subject: having built this little machine, I feel I can allow myself to make an observation. In its current state, it requires the construction of a specific keyboard for input, as well as the use of an S-video to VGA or HDMI converter. This slightly diminishes its financial accessibility, since you have to add the cost of a custom keyboard plus the time to build it, and also add an external converter. Furthermore, it also reduces the machine's ease of use.

I imagine a 12 or 13-year-old who knows nothing about it, coming from a family where technical aspects are unfamiliar, wondering how to build and get such a machine running. For me, back in the day, I bought a Sharp PC-1500. An all-in-one where you just had to insert four batteries and read some simple technical documentation for a few hours to be able to start creating.

I would therefore like to add a USB port for connecting a standard keyboard, as well as replace the Propeller processor that manages, among other things, the video output of the original machine, with a system allowing direct connectivity to a monitor via an HDMI port.

You can probably see where I'm going with this: since I managed to create a 640*480 video card with VGA output, I'm thinking I might be able to push the concept to HDMI and graft it onto the Cody.

We'll see...

And then, I received another FPGA development board. Because I still want to make progress on the Drumulator reconstruction. I know the electronics of this machine well. All the development paths I've taken so far don't seem right to me. I think I want to have a functional machine in an FPGA to properly develop the rest of the hardware.

My last attempt at creating a Zilog-compatible CTC didn't work. Software simulation is a real hassle with the Efinix solution—at least, I haven't been able to get the hang of it. There is the option of using an external logic analyzer, but I'd probably use that more in the final development stage to validate signal timing.

On the other hand, now that I have mastered the internal processor of this Trion FPGA, I plan to use that processor to send test signals to the CTC. This will allow me to display all the information I want directly via the serial port. It's probably not the best solution for verifying a VHDL design, but well, for a bunch of reasons, it's still the approach I'm going to adopt.


I should clarify that I am not sponsored by Efinix. However, despite a somewhat rocky start with this FPGA vendor's development tools, I must say that over time I've managed to get to grips with the platform. The ease of integration of their RISC-V Sapphire processor core is, in my view, an undeniable plus. Granted, in the past I thought I detected some potentially troublesome quirks of the Trion FPGAs, particularly regarding clock signals, but I'm prepared for that and can orient my designs accordingly. And, to top it all off, the development software doesn't require purchasing a license to enjoy the full potential of the IDE in terms of placement, routing, and other optimizations.

It's a strategy I don't understand from the other vendors: charging for a license, sometimes an expensive one, just to be able to fully use the target components? Well, one thing's for sure: if I do any work with FPGAs, it won't be with those folks. For them, it's all about whether it's profitable in the long run. For Altera, apparently not, since the company hasn't existed as an independent entity for some years now and was taken over by Intel, hmm...

vendredi 5 décembre 2025

The Death of Arduino?

You have likely already heard that Qualcomm acquired Arduino a few weeks ago. Since then, the community has been questioning Arduino’s business model and, more importantly, the community-driven, DIY, and open-source nature of the ecosystem—and rightfully so.

Via the link below, you’ll find an interesting comment from Adafruit Industries, another key player in this ecosystem, which truly respects the "open-source" spirit of the community, regarding this acquisition of Arduino by Qualcomm.

https://www.linkedin.com/company/adafruit/posts/

 

 
In my opinion, Adafruit's remarks are entirely relevant and, I believe, irreversibly highlight the path that Arduino is now set to follow—leading to what we can easily 'predict' will be the more or less swift abandonment of Arduino by the community. Ultimately... to the pure and simple end of the Arduino adventure.

Open question: What comes next? For my part, I've been noting several independent companies operating in the realm of 'open-source' electronics. In reference to this post: Adafruit, obviously, but also SparkFun, DFRobot, Seeed Studio, Keyestudio, and many others.

The common characteristic of these companies is that they rely on existing ecosystems like Arduino itself. Although these companies also often develop their own "personal" ecosystems, the disappearance of Arduino could still lead to a brief period of uncertainty in the DIY world.

My conclusion: The torch will certainly be picked up. By a third party or by the community itself. Alternatively, the moment may be ripe for the emergence of another platform in the same spirit. Potentially both possibilities.

In any case, I feel this moment is more one of renewal rather than an ending. A heads-up to creators of all kinds...

 

mercredi 5 novembre 2025

Efinix FPGA with the Sapphire SoC.

After my misadventures trying to create an MSX cartridge with the help of an embedded processor, I decided to dive headfirst into the fantastic world of SoCs.

So, of course, I have some experience with FPGA programming, but I have never attempted the experiment of implementing a processor that can be debugged in real-time.

In the past, I have implemented Z80 and even 68000 cores. But each time, it was to run code already developed for the version with a 'real' processor.

Here, the goal is to develop a new application. In fact, it's about porting the routines I already created for the STM32 processor-based MSX card to the embedded processor from Efinix. It is a RISC-V machine with very promising capabilities.

I struggled a bit to understand how the system works and to set it up. It's actually very simple, but when you've never done this kind of thing before and everything has to be figured out on your own, it takes a little time.

After a few tests done solely with the Efinix development board, I had a small extension board made with Flash, SRAM, and an isolated USB serial port. So, I'm resuming my development on this system after a few weeks' break. Naturally, even though I had taken notes during my initial tests, I encountered a few difficulties again, but nothing serious.

A quick and dirty card designed to run at low frequency anyway.

Issues with software instability; random problems with JTAG probe conflicts.
But overall, after two or three PC reboots, everything settled down and is working perfectly fine.

And, so, a little screenshot of the system in operation. I'm just using a slightly modified example program to regularly display text strings in a terminal.


Und fertig!