RISC-V is doing well
Back in 2010 some researchers at the University of California, Berkeley started work on an instruction set architecture (ISA) that would be both open for anybody to use and built on modern ideas. All computers run by performing a series of operations, like loading a 16-bit value from memory, adding two 32-bit numbers together and returning the highest possible 32-bit value if the result can't be represented in 32 bits, or taking the cosine of a 32-bit number. An ISA defines which operations are basic to the computer and which have to be assembled out of other instructions. It also tells you how instructions are represented as sequences of 1s and 0s in memory. And it specifies various other things, such as how memory accesses from different cores interact.
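To make that concrete, here's a minimal C sketch of the saturating add just described, where an overflowing result is clamped to the highest representable value (the function name is my own; this illustrates the operation in general, not anything specific to RISC-V):

    #include <stdint.h>

    /* Saturating unsigned 32-bit add: if the true result doesn't fit
     * in 32 bits, return the highest possible 32-bit value instead of
     * wrapping around. */
    uint32_t sat_add_u32(uint32_t a, uint32_t b)
    {
        uint32_t sum = a + b;      /* C unsigned add wraps modulo 2^32 */
        if (sum < a)               /* wrapped, so the result overflowed */
            return UINT32_MAX;     /* highest possible 32-bit value */
        return sum;
    }

An ISA that includes saturating arithmetic does this in one instruction; on an ISA that doesn't, the compiler has to assemble it out of a plain add and a compare, which is exactly the distinction described above.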
You might have heard of the great RISC versus CISC wars of the 1980s. For a long time it was very expensive to move data between memory and the computer core (there was only ever one back then). Transferring instructions is part of that, so to save on instruction memory designers wanted to use as few instructions as possible and made each instruction as powerful as they could. And since programming back then was usually done by writing individual instructions rather than using compilers, this also meant less work for the programmer.
But as time went on, the amount of data that computers worked on grew faster than the number of instructions they used, making code size merely important rather than critical. Programmers started to use compilers more. And some researchers at Berkeley and Stanford realized that there were a lot of advantages to a less complex instruction set. If you simplified your ISA, you had an easier time doing things like starting one instruction before the previous one had finished, because there were fewer complicated interactions. Fewer instructions meant less work for designers. And in the early 80s you could fit an entire RISC core on a single silicon chip rather than spreading it across multiple chips, which made it cheaper and faster.
A lot has changed since the 80s. Some aspects of the RISC philosophy have fallen by the wayside, but others are embraced by everyone designing a new ISA for general-purpose computers. And RISC-V is, of course, firmly in the RISC camp.
Other people have created ISAs that are open for anybody to use and free of patents, but none of them ever really took off. I'm not familiar with them so I'm not going to speculate on why. In contrast, RISC-V has gotten a lot of people interested. A number of concrete processors that adhere to the architecture have been designed at Berkeley and other places and released for people to use freely, which may be part of it.
When I first heard of these efforts a couple of years ago I was impressed. Back when I was doing my thesis I could see how an open chip design would have been useful for me to modify and try out my ideas on. Now that these designs were out there, free to modify and with working compilers and other software available, lots of academics working on processor design were going to have a very powerful tool. So RISC-V clearly had a bright future in academia.
In the outside world there were certain benefits. RISC-V makes it very easy to add your own new instructions for whatever special purpose you might have, so companies with such purposes in mind would have a reason to look at it. I wasn't optimistic about a wider impact, though.
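For a taste of how that extension mechanism works: the RISC-V encoding deliberately reserves major opcodes (e.g. custom-0 and custom-1) that the standard sets aside for non-standard extensions, so a vendor can slot in new instructions without colliding with anything. Below is a rough C sketch of packing a hypothetical R-type instruction into the custom-0 slot; the field layout and the 0x0B opcode come from the RISC-V spec, but the instruction itself is made up for illustration:

    #include <stdint.h>

    /* Pack a hypothetical R-type instruction into RISC-V's reserved
     * custom-0 major opcode (0x0B). Standard R-type field layout:
     * funct7[31:25] rs2[24:20] rs1[19:15] funct3[14:12] rd[11:7] opcode[6:0] */
    uint32_t encode_custom_r(uint32_t funct7, uint32_t rs2, uint32_t rs1,
                             uint32_t funct3, uint32_t rd)
    {
        const uint32_t CUSTOM_0 = 0x0B;   /* opcode reserved for vendor use */
        return (funct7 << 25) | (rs2 << 20) | (rs1 << 15) |
               (funct3 << 12) | (rd << 7) | CUSTOM_0;
    }

Because that space is promised to future-proof against standard extensions, a company can teach its toolchain the new encoding and ship it without worrying about collisions.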
Well, it now looks like I was underestimating it. At the seventh RISC-V Workshop yesterday, Western Digital announced that they were moving to RISC-V for the microcontrollers in their hard drives, which tell the drive head where to go, communicate back to the motherboard, etc. That's potentially billions of RISC-V cores shipped in commercial products every year.
A while ago NVidia also announced that they were looking at RISC-V for microcontrollers orchestrating things in their graphics cards while the GPU cores did the computational heavy lifting. They mentioned that the ability to add their own extra instructions was a big draw.
So that's some success in embedded microcontrollers. That makes sense for people who want more customization or who don't want to pay licensing fees to, say, ARM. A few days ago I certainly hadn't been expecting people to seriously consider RISC-V for application cores running all sorts of different programs, such as in a phone or laptop. If you're receiving applications from third parties, they can't make use of any special extra instructions you have, so the RISC-V flexibility isn't a factor. And nobody has created applications for RISC-V, though you can always compile existing code for it if you have access to the source.
Well, I still think that, but another of the talks at the Workshop presented a fairly hefty four-core chip that would do pretty well inside a laptop. I'm not sure anyone is going to put it there, but I'm sure people will be using it for servers, where you're running a narrower selection of software. Support for RISC-V is being added to Linux, though it isn't complete yet.
The whole thing is moving faster outside of academia than I would have expected and I'm interested in seeing what the future brings.