XMOS xcore RISC-V

New 19Dec2022, updated 27Sep2023 (readability, I didn’t understand my own phrases, [2], [1], ChatGPT). In work. On a first read I suggest you don’t press the links; on any second read, you’re allowed to. This note is my first reaction to the surprising and bold switch by XMOS, to let the xcore architecture run an extended RISC-V instruction set.

Intro

Will the new track attract new users?

A disclaimer here is that I write about matters of which I have only remote knowledge, where I could easily end up thinking that the last speaker was right. «Ok, that sounded like a reasonable argument». The last (and only) speaker is XMOS:

A week ago this mail appeared in my inbox (Mail of 12Dec2022 (read in browser)):

«XMOS announces software-defined SoC platform now compatible with RISC-V.

Fourth generation xcore® architecture delivers software-defined silicon to RISC-V users – ready to accelerate development of the intelligent IoT.

XMOS today reveals a RISC-V compatible architecture for the fourth generation of its xcore platform. The collaboration delivers the flexibility to define entire systems in software, enabling RISC-V programmers to rapidly realise the most differentiated and economical solutions to the intelligent IoT.»

What can I say, except that sometimes a switch is necessary to keep moving.

I don’t know how much work it has been to implement the X4, given that they already have the X3 with a proprietary instruction set. I guess this, to some degree, depends on whether the X1–X3 are microcoded and whether the X4 is – in other words, whether XMOS has gone for a microcoded RISC-V. It looks like the latter is possible (here). I didn’t think the present processors were microcoded. Going from one microcoded architecture to another would perhaps be less of a chore than any other alternative, I assume? Assuming they haven’t thrown (too m)any babies out with the bathwater? (20Jan2023: I posted this at XCore Exchange point 1).

22Jan2023: Here’s what the ChatGPT Feb13 version by OpenAI answered to «Is RISC-V considered to be microcoded?»:

«RISC-V is not microcoded in the traditional sense, as it does not use a microcode engine to execute instructions. Instead, RISC-V uses a «straight-through» pipeline, where each instruction is executed directly in the hardware.

However, some RISC-V processors may use microcode to implement complex instructions, such as floating-point operations, but this is not a fundamental aspect of the architecture itself.

Overall, RISC-V’s simplified instruction set and regular encoding allow for efficient implementation in hardware, without requiring a microcode engine to execute instructions.»

Update 03Mar2023. A quote from Max Maxfield‘s article [1]:

..the 4th generation of the XCORE architecture, will be RISC-V compatible. This is HUGE news. They had thought of doing this with their 3rd generation architecture but—at that time (circa 2017)—they felt the widespread adoption of RISC-V was not inevitable.

06Jun2023: In [2] I read that «..a RISC-V compatible Xcore—not a RISC-V Xcore, Lippett said..». Interesting! More:

«The XMOS architecture has effectively been “skinned” with RISC-V. This means effectively including the RISC-V instruction set architecture (ISA) on top of the existing core design. Since RISC-V allows extensibility, that doesn’t mean XMOS can’t also add its secret sauce. .. «From the hardware perspective, it was a relatively easy change, we didn’t give anything away in terms of the benefits of the Xcore,” he said. “There’s a register file change and instruction coding changes. The biggest challenge for us was moving an enormous amount of verification infrastructure onto a new ISA.” .. XMOS can also benefit from the tools available in the RISC-V ecosystem, which Lippett said gives customers many more choices than what XMOS could achieve alone. .. XMOS is building a RISC-V compatible chip on the new 4th-generation architecture, which it plans to sample around the end of this year.» 

My not so much .s-file xcore

I have come to know the xcore architecture by studying the hardware documentation and by using it with the xC programming language, and then trying to write about what I thought I saw: My XMOS pages. Recently I have used lib_dsp and seen some xC code there, but lib_dsp mostly contains .s assembly code (which the assembler consumes). I am still at generation X2 and using the now obsolete xTIMEcomposer 14.4.1. I still haven’t needed to relate to the dual-issue instruction bundling which started with the X2. I have read and written some about the X3, which they also call xcore.ai (hardware manual here). The XTC Tools I have only tested a little, even if I now use Microsoft Visual Studio Code (VSCode) for 90% of my editing these days. I love it, even if it doesn’t fold as well as I’d like (Wishes for a folding editor). The Inmos F and later the WinF that I used for my occam files in the nineties left a diamond in my view of what a program editor should be.

Anyhow, I haven’t really related to the instruction set of the xcore. But I have felt its grandeur. I did list some points at 141:[xCore instruction set]. I did do some assembly coding early on in my career, but I soon left it for PL/M-51, and «soon» means early eighties.

In other words, my .s-file xcore wasn’t really in my vocabulary.

My RISC-V xcore

Will the large obey the order?

XMOS lists a lot of pros in the downloadable whitepaper «USING RISC-V TO DEFINE SOCS IN SOFTWARE» (via signed download). I like what I read. The result may be that if I get involved with the X4 (xcore 400) in the future (and skip the X3?), then all the ready-made RISC-V tools may make the instruction set more visible to me. Wikipedia: RISC-V

There are a few things that I think are lost causes, but where I could at least hope. I’d hope for full support of all three task models in some rev2 of the xC compiler:

Three task models

  1. Standard tasks (like occam tasks). This will of course be the standard. One task per logical core
  2. Combinable tasks on one core. How are 64 GPIO pins per tile going to be utilised if we don’t have combinable tasks? Observe that not all of these pins need to be time critical. Without combinable tasks I am afraid that the (rather few) standard tasks may become too large, with too many unrelated jobs to do – instead of doing divide and conquer to make more and smaller tasks. (Occam had arrays of processes, here). This is about the balance between external coupling and internal cohesion. 16 standard tasks is fine, but not if half of them are used for the libraries needed to go from ASIC parts to software-defined SoC silicon. Would there be any RISC-V hart mechanism that could link together the selects of different tasks and thus make combinable tasks possible? (A minimal xC sketch of combinable tasks follows after this list.)
  3. Distributable tasks, where inlined calls take the burden. A distributable task piggy-backs on its mother task to such a degree that sending a message to it is a call to a function, running on the same call stack

See CPA 2018 fringe (where even hard-core occam people thought that this was nice), but also more of my struggling with the implementations at My xC combined combinable notes.
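Here is a minimal sketch of what combinable tasks look like in xC (as I know them from xTIMEcomposer 14.x). The port names, the placement and the blink periods are made up for illustration only; a real board would define its own:

    #include <platform.h>
    #include <xs1.h>

    on tile[0]: out port p_led_a = XS1_PORT_1A;   // hypothetical 1-bit ports
    on tile[0]: out port p_led_b = XS1_PORT_1B;

    [[combinable]]
    void blinker(out port p, unsigned period_ticks) {
        timer tmr;
        unsigned t;
        unsigned value = 0;
        tmr :> t;
        while (1) {
            // The loop body of a combinable task must be a single select,
            // so that the compiler can merge several such tasks into one
            // event loop running on one logical core
            select {
                case tmr when timerafter(t) :> void:
                    p <: value;
                    value = !value;
                    t += period_ticks;
                    break;
            }
        }
    }

    int main(void) {
        par {
            // Two combinable tasks sharing the same logical core
            on tile[0].core[0]: blinker(p_led_a, 50000000);   // 0.5 s at the 100 MHz reference clock
            on tile[0].core[0]: blinker(p_led_b, 25000000);   // 0.25 s
        }
        return 0;
    }

Whether anything like this survives into a RISC-V based rev2 of the compiler is exactly what I am hoping for.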

Of course I’d also like to see:

  1. xC interfaces. These are, unlike channels, typed. Plus they implement «safe» (deadlock free) communication sessions between clients and a server at a rather high level (a minimal sketch follows below). Then there is CEO Mark Lippett‘s statement that «We have never claimed binary compatibility, we’ve always asked our customers to recompile their code, that will remain the case as we move from an xcore instruction set to a RISC-V instruction set». Might this indicate that they will indeed port the xC compiler? Or does he disregard the years of xC completely? (XCore Exchange point 1.2, I also asked this in CousinItt’s thread on 22Jan2023.)
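To show what would be lost, here is a minimal sketch of a typed xC interface served by a [[distributable]] task, again with made-up names (gpio_if, p_out) and a single transaction, just to illustrate the idea:

    #include <platform.h>
    #include <xs1.h>

    interface gpio_if {
        void set(unsigned value);   // typed transaction, checked by the compiler
    };

    on tile[0]: out port p_out = XS1_PORT_4A;   // hypothetical 4-bit port

    [[distributable]]
    void gpio_server(server interface gpio_if i, out port p) {
        while (1) {
            select {
                // Since each case is a simple transaction, the compiler may
                // turn i.set() into a plain function call running on the
                // client's logical core and stack - the piggy-backing above
                case i.set(unsigned value):
                    p <: value;
                    break;
            }
        }
    }

    void client_task(client interface gpio_if i) {
        i.set(0x3);   // reads like a call, no channel protocol to get wrong
    }

    int main(void) {
        interface gpio_if i;
        par {
            on tile[0]: gpio_server(i, p_out);
            on tile[0]: client_task(i);
        }
        return 0;
    }

If a rev2 xC compiler, or something equivalent on top of RISC-V, kept this kind of typed, compiler-checked session, I’d be happy.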

My way

Will the one’s foundation become more enlightened?

I had fun at work over 40+ years, even if all I programmed were unicore microprocessors that were more or less peripheralised. However, a task concept was essential early on (Notes from the vault – 0x05 – RTX-51, an embedded scheduler). The highlight of this was occam on transputers. I published some stuff and, I guess, invented a deadlock free channel, the XCHAN (2013 at Publications).

What I would not like to do in the future (in my home office) is to buy an ESP32-C board and code in C. It is the opposite of xcore RISC-V. The same basic instruction set, but crammed with peripherals. Boring.

xcore RISC-V > ESP32-C

Then, all these nice ARM boards. I must admit, I already have several, with code picked up via Arduino. Some of the ARMs that land on my desk probably are multicore these days. Still, as I see it, compared to xcore RISC-V, boring:

xcore RISC-V > ARM even with ∞ Arduino

I can’t wait to get my hands on the XTC Tools and the RISC-V tools. Depending on them also being free. Plus, the new timing analysis for these deterministic xcore machines. No cache! XMOS must have had second thoughts about the future of the xta tool, like an ad for a development engineer recently suggested. Perhaps even timing analysis across communication, task to task?

I guess that XMOS would say that since they are now porting FreeRTOS, I don’t need combinable tasks, not even xC’s interfaces. They may be right; but it certainly is a step up in abstraction. Some competitors would run Zephyr on their boards (My Zephyr RTOS notes), which is very nice. But nothing like an xcore RISC-V with hardware threads on the metal. I just can’t wait!

Salt?

I assume that XMOS would say that it was good that I started with their board, the startKIT. My first commit on the aquarium project was 21May2017. The fish have lived there since 25Nov2017. My present project has 12000 xC lines, most of them my own: My Beep-BRRR notes (movie). Even I, being retired and sitting here alone, have a code base (My xC code downloads page). Then, a year ago, xTIMEcomposer 14.4.1 had not been updated for a year. I still use it. The XTC with lib_xcore, however well envisaged it may be, is something else. From one angle: salt in the wound.

Had I been in a company I would have felt somewhat scared, not knowing what to tell my boss. I talked one of them into transputers and occam once. (On the brink I even went to Bristol and met the white-collar guys who thought they were selling something other than the transputers that we placed in the numerous ships’ engine rooms.) I have also seen all these machines with legacy tools in people’s offices. You get a new machine, but that old system is with you, on that machine. Like with me: 14.4.1 on a reserved machine[222]. Forget VMs or porting it to a new environment. If it works, don’t mend it.

Any technology has its time. You could say that I in 2017 was a latecomer to xC and xTIMEcomposer. Fair enough. But of course, this explains why the move by XMOS this time is particularly bold. Nice technology. I very much hope it will blend in, and that any wound will heal. It’s not the new technology that bothers me. It’s the feeling of targets on the move.

But given time I will have any wounds healed, and reach for the salt shaker that XMOS is placing in front of me.

Even if the X2 xCORE-200 explorer board is becoming obsolete (which is why I purchased some recently), the X3 XCORE.AI eval kit is finally available. Where else would I find real concurrency on a 16-core machine that runs so cool that I don’t think it would melt butter? Both restarts (?) for me and fresh starts for you (?) should be viable.

XCore Exchange

  1. What’s all this RISC-V stuff, anyhow? started by CousinItt.
    It includes links to presentations, meticulously found by CousinItt:

    1. Dec2022 (?) on EDACafe Bunker Broadcast where Sanjay Gangal interviews Mark Lippett, XMOS CEO, on the topic of the X4’s RISC-V: https://edacafe.com/video/XMOS-Mark-Lip … media.html. «No benefits from the three earlier generations given away». «From a functional perspective the architecture will be able to do all the same things». «We have never claimed binary compatibility, we’ve always asked our customers to recompile their code, that will remain the case as we move from an xcore instruction set to a RISC-V instruction set». (See my comment above at the xC interfaces point, following Three task models.) «There is no other RISC-V embodiment that has the characteristics and the ability to emulate hardware even down to the single-digit nanoseconds with absolute timing precision and at the same time as providing a platform for application code that is software real-time».
    2. Mark Lippett’s presentation to the RISC-V conference Dec2022: https://www.youtube.com/watch?app=desktop&v=WBfZK3EWPAs
    3. As requested, here’s Henk Muller’s presentation: https://www.youtube.com/watch?v=q594f_7Irg0

References

  1. XMOS: Using RISC-V to Define SoCs in Software by Max Maxfield in Electronic Engineering Journal (EEJ) (March 2, 2023), see https://www.eejournal.com/article/xmos-using-risc-v-to-define-socs-in-software/
  2. XMOS Joins RISC-V Ecosystem by Sally Ward-Foxton in EE Times (May 30, 2023), see https://www.eetimes.com/xmos-joins-risc-v-ecosystem/
