[For readability, moderator comments have been removed, as well as minor questions for better understanding.]
Well, good afternoon everyone. I'm Jean-Paul Chaput, and today I will present Coriolis, an RTL-to-GDSII tool chain developed at Sorbonne University.
So here is a very simplified view of the design flow. On the left you have the hardware description languages, whatever they are; then we go through logical synthesis, then physical synthesis, and then we get the layout. To be completely accurate, Coriolis is the blue part, the part that we specifically developed at Sorbonne University, but we have also developed a framework to manage all the tools of the chain. The red ones are third-party tools, developed outside our lab, which we use.

When we developed Coriolis, we wanted to do something different from what was done at the time, between 15 and 20 years ago. We wanted to make integrated tools; that was the key point of Coriolis. We did not want to only share the underlying database and have the tools run one after the other, for example run the placement first, then the global routing, and finally the detailed routing. What we wanted is that all the tools reside in memory at the same time. The tools still run sequentially, but parts of them can run out of order. Typically, we use that to tightly integrate the global routing and the detailed routing: some parts of the detailed routing run before the global routing. The fact that everything resides in memory allows full communication between the tools. We don't need files, we don't need anything extra. We can bind the tools together very tightly, and this is a key point of Coriolis.

So you have the database on the left, and the end result is that we wrap everything in Python. The whole set of tools is completely scriptable in Python. By that I mean that, in the end, there is no single binary in Coriolis; there is just a set of libraries, bound together by Python. We write the computationally intensive parts of the tools in C++, put them in libraries, and then assemble the libraries the way we want. For experimental needs we can rearrange the tools and run a lot of experiments in Python, which is much faster than doing it directly in C++; we do that for fast prototyping. So the end result is a mix of highly efficient C++ tools and Python glue, and sometimes even Python algorithms. Part of it is written in Python, part in C++, in an almost seamless way: we can communicate very efficiently between Python and C++.

This allowed us to completely integrate analog design. There is one feature which is not represented here, but we will see an example at the end of the presentation: we can completely mix analog and digital design. For the designers in the room, there is no longer an analog-on-top or digital-on-top approach; it's seamless. This is still a demonstrator, but we have all the capabilities to do it, and we will achieve it sometime soon.

So what are the current capabilities of Coriolis? Making ASICs is very difficult; anyone who has tried to produce a GDSII that passes all the verification is aware of that. So we started by targeting the mature nodes, that is, 130 nanometers and above, typically SkyWater, and we will slowly go down to more advanced nodes as we add feature after feature, mainly timing closure. So what is our strategy, how do we manage to do that?
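[Editor's note: to make the "Python glue over C++ engines, one shared in-memory database" idea concrete, here is a minimal, self-contained sketch. All class and method names are invented for illustration and are not the actual Coriolis API; in Coriolis the engines are C++ classes exposed to Python, whereas here they are pure-Python stubs.]

```python
# Hypothetical sketch (not the Coriolis API): every engine works on the same
# in-memory design object, so no intermediate files are needed and steps can
# be interleaved freely.

class Design:
    """Shared in-memory database: netlist, placement and routing live together."""
    def __init__(self, name):
        self.name = name
        self.placement = {}
        self.routing = {}

class Placer:
    def __init__(self, design):
        self.design = design
    def run(self):
        self.design.placement["done"] = True

class DetailedRouter:
    def __init__(self, design):
        self.design = design
    def preroute_clock(self):
        # A detailed-routing step run *before* global routing.
        self.design.routing["clock"] = "pre-routed"
    def run(self):
        self.design.routing["signals"] = "routed"

class GlobalRouter:
    def __init__(self, design):
        self.design = design
    def run(self):
        # The global router sees the pre-routed clock directly in memory.
        assert self.design.routing.get("clock") == "pre-routed"
        self.design.routing["global"] = "routed"

# Python glue assembling the flow, the way a Coriolis script assembles its libraries.
design = Design("adder")
Placer(design).run()
drouter = DetailedRouter(design)
drouter.preroute_clock()        # out-of-order detailed-routing step
GlobalRouter(design).run()
drouter.run()
print(design.placement, design.routing)
```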
As I said at the start of the presentation, we have completely integrated tools, but they still run basically sequentially; even if some parts of them can run out of order, the flow is basically sequential. What we want to do to reach advanced nodes is to manage timing closure, and for that we want to go to another level of integration between the tools. That is, instead of having the tools run sequentially, we want them to run step by step, through a progressive refinement process. The idea is basically to make one step of placement, then perform a timing analysis and extract constraints, some information, that will guide the next step of the placement. That involves global routing as well as placement, and we will go down progressively until all the objectives are met. This is our next big step. So this is the challenge.
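[Editor's note: a toy, self-contained sketch of the refinement loop described above. The functions are placeholders, not Coriolis code; only the control structure — place a bit, estimate routing and timing, derive constraints for the next step, stop once the objectives are met — is the point.]

```python
# Hypothetical sketch of a step-by-step, timing-driven refinement loop.

def place_step(design, constraints):
    # One incremental placement step; critical nets in `constraints` get priority.
    design["effort"] += 1 + len(constraints)

def route_and_analyze(design):
    # Global-routing estimate followed by timing analysis; returns worst slack (ns).
    return -3.0 + 0.8 * design["effort"]

def derive_constraints(slack):
    # Turn the timing report into constraints guiding the next placement step.
    return {"critical_nets": max(0, int(-slack * 10))}

design = {"effort": 0}
constraints = {}
for step in range(20):
    place_step(design, constraints)
    slack = route_and_analyze(design)
    print(f"step {step}: worst slack = {slack:+.2f} ns")
    if slack >= 0.0:                 # objectives met: timing closure reached
        break
    constraints = derive_constraints(slack)
```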
So here is the first example of a design that we did. This is the first stage of an OpAmp (I don't know the exact name). In the lower part you see the analog design. It's not very compact, because that was not the point here; this is a test example. It is mixed with a decoder on top. The part on top, you can see clearly, is a different kind of layout: it's the digital part, made of standard cells, and it performs a decoding task which controls the little devices exactly below it, the set of horizontal lines you see. And it's completely integrated: the global routing and the detailed routing are done with the same structures for both parts, analog and digital, and the global and detailed router can handle the specific constraints of the analog part.

Next, what we did very recently is this small test chip in the PragmatIC technology. It's only a very small one, 760 standard cells. It's a small thermometer, and it was made by PragmatIC with their flexible technology. It's a four-metal-layer technology, of which two are available for routing. So it's a bit tough for the router, because it has only two metal layers for routing, but it was still able to complete the routing over the cells. The chip was made, and it worked the first time, with a yield of around 70%. So it was very interesting.
The next one is the biggest chip we have done with Coriolis so far: 1.3 million transistors. It's an implementation of the LibreSoC chip, an OpenPOWER architecture, so it's quite different from RISC-V. We were only partially able to test it, due to some difficulties. We were only able to check the PLL, but that says a lot for us, because it means that the I/O pads work and that the standard cells work, especially the D flip-flops, because the PLL contains D flip-flops. The PLL did work and generated a clock at the expected speed. But due to some problems out of our control, we weren't able to fully test the chip.
And finally, we also made a little RISC-V through ChipFlow, which was sent to the Skywater MPW4 program. For that one we don't know whether it works or not, because we are still waiting for the chip, so we haven't been able to test it yet.

To come back to the point of Coriolis: it is so tightly integrated with Python that you can describe your whole design with just one Python script, even the Makefile-like dependencies and the fact that you want to run Yosys or some other tool, all inside one Python script, and not a very long one. In fact, in those kinds of scripts, the longest part is the description of where the I/O pads are: if you have 200 I/O pads, then you need 200 lines, one per I/O pad. The rest is just calling the tools. It's fully customizable; you can do whatever you like.

One other point is that with the Coriolis project, not only do we want to provide tools, but we also want to provide blocks, and especially portable blocks. It has always been a big problem that when you change technological node, most of the time you have to do a lot of work re-doing or re-validating your standard cells, and it is even worse for analog blocks. So what we are also developing is portable analog blocks, and this will be another outcome of the project. So I think I am done, maybe a little too quickly, and now I'm waiting for questions.
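[Editor's note: to illustrate the "whole design in one Python script" point above, here is a minimal, self-contained sketch. The names (run_synthesis, place_and_route, export_gds, IO_PADS) are invented stand-ins, not the actual Coriolis design-flow API; the point is only the overall shape: a synthesis call, a place-and-route call, and an explicit I/O pad list that dominates the script's length.]

```python
# Illustrative sketch only; the three functions stand in for the real tool
# invocations (Yosys, placer, router, GDS export).

def run_synthesis(top_file, top):
    print(f"synthesizing {top} from {top_file}")
    return f"{top}.netlist"

def place_and_route(netlist, io_pads):
    print(f"placing and routing {netlist} with {len(io_pads)} I/O pads")
    return f"{netlist}.layout"

def export_gds(layout, path):
    print(f"writing {path}")

DESIGN   = "my_soc"          # hypothetical design name
TOP_FILE = "my_soc.v"

# In a real script this is the longest part: one line per I/O pad.
IO_PADS = [
    ("north", "clk"),
    ("north", "reset_n"),
    ("east",  "uart_tx"),
    ("east",  "uart_rx"),
    ("south", "gpio_0"),
    # ... one line per pad; 200 pads means 200 lines
]

netlist = run_synthesis(TOP_FILE, top=DESIGN)
layout  = place_and_route(netlist, io_pads=IO_PADS)
export_gds(layout, f"{DESIGN}.gds")
```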
Q&A
-
There are other tools like OpenROAD; why did you go for Coriolis?
First, in terms of chronology, we were the first: the Coriolis project started around the year 2000. But as we are a small team, we have to progress slowly and methodically through the tools. One of the issues is the database. We developed a very specific database, tailored exactly to suit our needs, and after developing on top of it for such a long time, it is difficult to switch to another one. I would also say it's not very beneficial: if you change your database, basically you redo exactly what you have done before, but on another database. So unless there is a very big incentive, meaning you gain something in the end, just switching the database is of little interest for now. But as I said, we started more than 15 or 20 years ago, depending on the starting point, and we are thinking about changing our database. It may happen in the future, but it will be a slow and very well-planned move.
[Comment from the audience:] One comment is that we all want to use OpenAccess, but OpenAccess is not open. So we use the Athena Design Systems database, which was donated to the project, and that is where we started. Having "open access" that is not open is like being told, "come to my house", without ever being given the key. And I think the world wants something that is really open.
-
Can you share your timeline for the whole mixed-signal flow?
I don't have a timeline yet. I know exactly what we have to do. We are in the process of hiring people, and it will depend on whether we succeed or not. I cannot give you a definite timetable, because I don't know yet; it's too difficult to say now.
-
...I'd like to see it by now.
Yes, I would say it's almost working. So it depends on the incentive: we can re-focus our priorities if there is demand. Up until now it was not our top priority, because we had other requests. But that can change.
-
I have a question, a more technical question. You said that you have a set of routines and you call them one at a time, keeping everything in memory — is that right?
Sorry, I can't hear you well.
-
You have a set of tools, and you call the ones you need, and so on. And you said that you mostly keep everything in memory. But computers have limited memory, so at some point, when the complexity of the chip grows, you will hit that limit. How do you handle, or predict, that?
I think for now we rely on the memory; I mean, we can only manage chips that fit into the memory of the system. But until now we have not reached that limit — we have not made a very huge chip. The biggest one we made is the TSMC one, and I think it fit in less than 10 gigabytes. So we are quite compact in memory.