The 96Boards Consumer Edition (CE) Specification v1.0 is now available


The first version of the 96Boards Consumer Edition (CE) specification is available on this website in PDF format here. Please post your comments and questions on this forum.

hi steve,

i’m the author of the EOMA standards, which have been developed openly and publicly over the past few years. the EOMA standards are designed as simple, long-term, consumer-friendly standards: the primary motivation is that the sales pitch, for initial sales as well as for future upgrades, is “Just plug it in: it will work”.

i would therefore like to offer some insights into the 96boards “consumer” standard. i have to warn you - and i apologise deeply in advance - that it’s not a favourable assessment, based on the intended market deployment (“consumers”). my advice here is that you should have announced this a long time ago, opened up the forum and the standards process to consultation, waited for at least one year, announced preliminary boards on a draft standard, waited another year to see whether it was found to be successful, and then - and only then - finalised the standard.

so, the first point: both the web site and the standard (which took a long time and over 5 clicks to find) are completely unclear as to what the standards are for. i assume, because the word “consumer” is included in the first standard’s name, that it is targeted at the average end-user; however i have had to infer that by logical deduction. i assume therefore that the intent is that the average end-users to whom products are targeted will want to upgrade at a later date, keeping the casework and peripherals, just as some (very advanced) PC users do at the moment. i assume that the intent was to make it easier to follow this upgrade path, although that is not entirely clear and remains speculation.

so that’s the first thing: the actual purpose of this standard is completely unclear!

moving on from there: the key to a successful standard is that there should be absolutely no possibility for confusion in the eyes of the end-user. high-end gaming PCs, for example, take months to plan and build. the average PC is slightly better as long as you steer clear of internal PCIe upgrades (USB2 and now USB3 peripherals, thanks to the USB standard, “just work”). even with PCIe cards, as long as you have a full tower, you can pretty much buy whatever you like and expect it to “just work” - again, thanks to the auto-negotiation at the hardware level.

this gives a clue as to what is really really important about developing end-user-targetted standards: there must be NO optional interfaces.

so the first indication that there’s been a design mistake is on page 5, “Optional MIPI CSI-2”. this means that anyone wishing to purchase a 96boards-compliant host device - especially if they may be expecting to upgrade in the future - must carefully review the options on the initially selected hardware. but the problem is that even in that initial selection there will be fear in their minds: “what if i cannot find an upgraded board in the future which has the EXACT same options as the one i am intending to buy right now?”

the second source of confusion is in the SoC Location Options. assuming that end-users will be considering upgrading (possibly on a financial budget), or will simply want to be socially and ecologically responsible by re-using parts that they already own, it is reasonable to expect that they will want to keep the casework when upgrading. so the key source of confusion is not that there are two possible power-dissipation options within the standard: it’s that catering for both has not been made mandatory.

page 6: DRAM and eMMC being optional on the board, that’s absolutely fine. specifying minimum RAM and boot flash size requirements, that’s a great idea. microSDHC: i don’t fully get the rationale here for mandating microSD. also, it has not been made clear what the minimum compliant standard is (3.0 or 4.0). this is especially important given that the 4.0 standard [stupidly] removes SPI compliance.

page 7: WIFI / bluetooth. this just drives up cost. i don’t understand the rationale for increasing cost.

page 7: Display Interface. i don’t understand the rationale for mandating the interface type. it guarantees exclusion of lower-cost SoCs (Allwinner A13, A10s, quad-core A33, and many others that provide no MIPI, DisplayPort or HDMI functionality), and also results in casework confusion over the choice of Type A, Type D, MHL or DisplayPort connectors. it would have been far better to leave all of this up to the OEM designers, specifying a base casework standard and allowing them to create matching 0.1mm stainless steel port-plates just like in ATX PCs today.

again: on the audio interface, this once again excludes certain SoCs. audio is very very tricky: there is such a massive amount of variation. this indicates that it should have been left up to the OEMs to decide how to provide audio, with the standard specifying that it must BE provided but leaving it up to them as to how it should be done (and to provide a matching port-plate).

MIPI 1-4 lane selection, with implementations being allowed to use fewer lanes: GOOD. this is the CORRECT thing to do.

regarding the sentence “Note that if a single DSI interface…” - i genuinely cannot make heads nor tails of it. i don’t understand it at all. but the red flag is “optional as to whether the on-board interface is useable at the same time”. this is an absolute no-no.

page 7: Camera Interfaces. whilst allowing selection of between 1-4 (or 1-2) lanes at the hardware level is perfect, the optional provision of CSI1 most definitely is not. the scenario which gives the “red flag” is the one where an end-user purchases an initial unit with 2 CSI interfaces, then later unintentionally purchases an upgrade with only one. in the best case they will merely have wasted their money, because there is actually nothing wrong (under warranty) with the new unit, but in the worst case there will be considerable product returns. as you know, in the consumer market, ANY level of returns is extremely bad, and above a certain threshold the entire product line - and the standard itself - will be killed off by every single retailer.

destroying the reputation of the standard, due to angry end-user reports over time, is perhaps the absolute worst thing that could possibly happen… and unfortunately, based on the standard itself, we can pretty much guarantee that that is what is going to happen.

“GPIO signals” - this is the very first mention of GPIO in the entire document, and it’s done as an afterthought! it’s only because i have developed standards myself, and am familiar with dozens of ARM and other SoCs, that i know what is being referred to here (multiplexing). that it is not clearly spelled out will be a severe source of confusion for OEMs.

page 7: USB ports. “two Type A OR Type C ports (USB 2 or 3) shall be provided” - again, a massive source of confusion for end-users when it comes to choosing an upgrade: the casework may not be preserved when upgrading.

regarding USB-OTG: i also thought about whether it would be wise to include USB-OTG in EOMA standards, and in the end only decided to include it (non-optionally) in one standard which was “dongle-like” (similar to these HDMI-USB dongles), on the basis that the power requirements of such small devices were well within the 500mA power provision of USB-OTG.

at no point did it ever occur to me to allow USB-OTG to be optional within any of the EOMA standards-compliant connectors. i did however mention that it would be fine to have USB-OTG as an optional interface via the general user-facing fascia plate, where the OEMs were permitted to place any interfaces of their choosing out through that area. there are several standards that follow this guideline, and it is good practice.

so, again, there is a source of confusion for end-users in both selection of their initial purchase as well as for future upgrades. and unfortunately the confusion multiplies with the number of permutations from all the various options.

page 8: audio. mandating audio via bluetooth - again, i do not see a description of the rationale behind making this mandatory (nor why bluetooth itself is a mandatory requirement). I2S/PCM - which is it? is it I2S or is it PCM? there is no link to the I2S standard on page 15, where the References are given, so it is unclear. also, I2S is quite a high-end standard, which immediately excludes certain SoCs. additionally, which variant of I2S is permitted: the 5-pin variant or the 8-pin variant? all in all - and i know this because i have had to think about it for some considerable time - the audio section of this standard leaves a lot to be desired. it would be far better to leave it up to OEMs, merely stating the minimum requirements (stereo, 16-bit, minimum of 32kHz, mono mic), just as has been done with ATX PCs for at least a decade and a half now, with the 0.1mm steel fascia plate.

page 8: DC Power. this, as i have discovered, is one of the hardest things to get right. the EOMA68 standard - quite by accident - is actually almost USB-OTG compliant (except for being a maximum of 5.0 watts). it is then possible to use power ICs such as the AXP209 PMIC (commonly paired with Allwinner SoCs) and a SY6208, or, for higher power provision for portable devices, i have recently discovered the LTC4155 to be a perfect match: it can supply up to 15 watts, has automatic dual-supply selection between 5.0V DCIN and USB-OTG, has boost power for USB-OTG from the on-board battery, and so on. to my eye, the power provision section of pages 8 and 9 looks… complicated and unclear, and it expands the source of confusion for end-users into the realm of the power sources that they will have to select and then replace later when upgrading.

but the real serious problem here with this standard is in the sentence “Limitations on available power shall be clearly documented”. this is a standard. you’re supposed to tell us what is required!

page 9: power measurement. excellent idea!

page 10: power button being external or via a pin connector: great idea. implementation note: yeah, i too had the same thought :slight_smile: for example on one board i have designed the power button is connected directly to the AXP209, whereas on another it is wired directly to the SoC’s “reset” line. the only thing not made clear is where the switch actually resides.

page 10: external fan connection. +5V or +12V - bad idea. it should be one or the other, not both.

page 11: UART: the second UART, again: confusion. there should be either one or two, but not “one OR two” interfaces. also, you need to be much clearer about the signal levels.

page 11: JTAG. making a particular JTAG interface mandatory will automatically exclude certain SoCs. this is generally inadvisable.

page 11: expansion connectors. this is only the second mention of GPIO, which should have been included right at the start in the summary (mentioned as multiplexing).

page 12: here is where we see the extent of the possible confusion for end-users as well as for OEM implementations. there are no fewer than FOUR mentions of the word “optional”, and we also see something that has not been specified before (I2C interfaces) mentioned only in passing. it is important to have a section on I2C, because there are minimum speed requirements for I2C that need to be clearly spelled out.

only now, in the “expansion notes on connector”, is multiplexing mentioned. GPIO needs its own dedicated section, because there are power requirements, speed requirements, TTL level requirements and much more. did you remember to specify the voltage levels of the GPIOs? are both input and output required? can the GPIO be tri-stated? what is the minimum resistance during tri-state? none of this has been covered in prior sections, yet the standard has moved on to the connectors already without giving GPIO an explicit section of its own.

general notes: the choice specifically of 1.8V for GPIO is a poor one. it guarantees that any SoC that does not provide exactly 1.8V GPIO must now include external bidirectional level shifters. and given that the GPIO ports are multiplexed, mezzanine boards are now made both more costly as well as much more complex.

it would have been far better to provide a reference voltage (VREFTTL) and to specify that when doing GPIO the VREFTTL rail must be used as the source for pulling HIGH/LOW. the VREFTTL voltage may then be put into one side of a level shifter (if it is required at all). quite often it is sufficient to choose a GPIO expander IC, and to power the GPIO expander IC from the VREFTTL rail.

so you have specified one pin to be the 1.8V “power reference” line - the pin is already allocated. it should simply have been “VREFTTL”, with VREFTTL permitted to be anywhere between 1.8V and 3.3V.

in this way you would have allowed far more low-cost SoCs to be selected, without the mandatory addition of level-shifter ICs that will, without a shadow of a doubt, increase the BOM for any SoC that is not exactly 1.8V. given that some of the SoCs on the market can be as little as $2 in 10k volumes, the addition of even a single $1 16-pin level-shifter IC - or, worse, a level-shifter IC with two-way port selection (in order to cater for the optional interfaces) - makes the deployment of such SoCs completely pointless!

overall then, it saddens me to have to conclude that this standard is simply not well thought-out, from several perspectives. the above is by no means exhaustive, but it is clear that the standard is a product of how it was developed.

if you believe that these are valid points that should be addressed, if you would like some assistance to develop future standards, i will be available to assist as long as you make it clear how i may financially benefit from doing so, both short and long-term.


Some first impressions: overall this looks like a step forward from the splintering into not-quite-compatible formats like the different Raspberry Pis, Banana Pi, Orange Pi etc. What stood out to me as missing is some kind of identification system for mezzanines, to allow plug-and-play or at least easy verification that your software is connected to the hardware it expects, but perhaps that could be done over the USB part of the high-speed connector. Perhaps some standard / convention could be developed in the community, on these forums.

The minimum spec seems quite high (in particular requiring WiFi and Bluetooth) so it would be nice to see manufacturers providing cheaper boards which don’t meet the full spec but have the same form factor and connector pinouts, to expand the use into more markets. (I’m thinking of something around the RPi level.)



john, that’s quite a good summary; i had forgotten to emphasise the fragmentation as well as the high barrier to entry. fragmentation is a serious problem that is basically inherent in the standard itself, making it… well… not really a standard at all. also, yes, identification of “carrier boards” is part of the EOMA68 specification, via an I2C EEPROM at a specific address. identification is entirely missing from the 96boards specification - i hadn’t spotted that.

there are a couple of problems with the suggestions that you make, john - these are just observations, ok?

the first is: using USB for identification forces even the lowest-cost mezzanine boards to carry a USB-compatible chip, purely for that purpose. I2C EEPROMs, on the other hand, are pretty low-cost.

the second problem with suggesting that the community create a convention is that it would not be part of this standard, causing yet more fragmentation.

and that brings us to the third problem, which is that this standard should, really, never have been declared “1.0”. “1.0” indicates “stable release”. in other words, the systemic flaws inherent in this standard are here to stay. a “1.0 release” declaration says “it’s too late to change”.

it took three years, and several painful decisions to abandon costly prototype hardware, when a little more thought on the EOMA68 standard led to the conclusion that if it had been released too early, it would fail. by making the painful decision to keep the standard in draft, removing inappropriate interfaces and replacing them with better ones, the lifetime of the standard is extended and the range of SoCs that can fit into it has been expanded.

these things take time, and you have to be prepared to go through them properly. but with a 1.0 release declaration, linaro has unfortunately told us that the development process of this standard is over, that our input is not and was not required, and that there was no intention to consult anyone outside of a small group of people involved in it. this is not a criticism: this is a simple and straightforward logical deduction from the actions that the creators of this standard have taken. we will see, over time, how far that process gets them.


Luke, I’m fairly new to embedded stuff, and didn’t know about I2C eeproms — agreed, they sound like a much better idea than USB (actually I steered away from looking for such a thing, because I knew the number of addresses on an I2C bus is smaller than that on a USB bus). It’s a pity that anything like that would have to be layered on top, but in terms of the numbering, remember that USB effectively started at 1.1!

I think that building device identification into the standard, so that the kernel could put appropriate entries into /dev, would be a big step forward for safety-critical use, as it means that the application would only be able to run with the right hardware (or, seeing it the other way round, only the right applications could run with the hardware you have). To contrive an extreme example, if your applications find their hardware as /dev/quadcopter0/motor* and /dev/reactor0/controlrod*, you won’t get the same sort of problems with running the wrong software that you could if they simply address the hardware by numbered GPIO pins. OK, there’d be many more lines of defence against mixups like that; this is just arguing ad absurdum to make it clear.
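A guard along those lines can be sketched in a few lines of Python. Everything here is hypothetical illustration (the function name, the device paths): nothing in the 96Boards spec defines any of it.

```python
import os

def require_hardware(*dev_paths):
    """Refuse to start unless every expected device node exists.

    The paths are made-up examples in the spirit of the
    /dev/quadcopter0 illustration above; nothing in the 96Boards
    specification defines such a namespace.
    """
    missing = [p for p in dev_paths if not os.path.exists(p)]
    if missing:
        raise RuntimeError("refusing to start: missing " + ", ".join(missing))

# the contrived quadcopter application would then begin with something like:
# require_hardware("/dev/quadcopter0/motor0", "/dev/quadcopter0/motor1")
```

The point is only that the check is cheap once the identification exists; the hard part is the identification standard itself.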


It does say, under “To comment on the specifications…”: “If you wish to be involved in defining future versions…”

I’m not so bothered about the second UART being optional (but then, I’m looking at this as standardizing a common part of breakout boards for SOCs); for SOCs that provide only one UART, it’s a pity to have to add the extra functionality on another chip; for those that provide two, it’s a pity to waste the second one.

I’ll jump in with a specific proposal about the identification ROM: that to avoid having to have a central registry of vendor numbers, and a binary format that later turns out to be either too simple or too complicated, we make it a series of null-terminated KEY=value strings (just like a Linux Environment) terminating the series with another consecutive null byte (and let’s say it’s UTF-8 text); and that certain keys must be given, perhaps VENDOR, PRODUCT, and VERSION.
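For concreteness, here’s a rough sketch (Python; the field values are purely illustrative) of how such a ROM image could be parsed:

```python
def parse_id_rom(blob: bytes) -> dict:
    """Parse a series of NUL-terminated KEY=value UTF-8 strings,
    ended by an extra consecutive NUL byte (the format proposed above)."""
    fields = {}
    pos = 0
    while pos < len(blob) and blob[pos] != 0:
        end = blob.index(b"\x00", pos)          # find this string's terminator
        key, _, value = blob[pos:end].decode("utf-8").partition("=")
        fields[key] = value
        pos = end + 1                            # step past the NUL
    return fields

# a hypothetical image a mezzanine vendor might burn into its EEPROM:
rom = b"VENDOR=ExampleCo\x00PRODUCT=sensor-mezzanine\x00VERSION=1.0\x00\x00"
```

One nice property of the double-NUL terminator is that an erased EEPROM (all 0x00 or preceded by a 0x00) parses cleanly as “no fields”.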


jcgs: indeed it does… however as i am a software libre developer who works in the open i prefer public discussions not private ones, for many many reasons, not least is that they keep you honest, provide other people with an insight into the development process, teach others what mistakes to avoid, and, importantly for me, allow others to tell me when i’ve made mistakes. to find out why i really dislike private discussions on complex topics, invert every single one of the benefits listed in the previous sentence.

for EOMA68 the key primary reason for using an I2C EEPROM is to store fragments of linux device-tree files. i think you’ll find that the registry concept you propose may, if you think about it, be expressed as linux device-tree files, with the advantage that the source code to parse device-tree files already exists.
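as a purely illustrative sketch (the node and property names here are made up - nothing in either standard defines them), a stored fragment might look like:

```dts
/* hypothetical device-tree fragment held in a carrier board's I2C
   EEPROM: it declares a made-up temperature sensor so that the host
   kernel can bind the right driver automatically */
&i2c0 {
	temp_sensor@48 {
		compatible = "example,i2c-temp-sensor";
		reg = <0x48>;
	};
};
```

the advantage, as noted above, is that the code to parse this already exists in the kernel: no new binary format, no central registry.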

… but all of this discussion is, unfortunately, far too late, and of no value as far as this particular standard is concerned. linaro has already declared this standard to be absolute and inviolate [without wider consultation]. a 1.0 release is a declaration that it is final.


ok so i have some more comments on this standard.

the first is: every GPIO pin is required to be an IRQ pin. this automatically excludes a large number of SoCs, because many SoCs have an extremely limited number of IRQ-capable GPIOs. anyone wishing to be compliant with this specification whilst using such a SoC is forced to add a very large two-port bidirectional tri-state multiplexer IC - and a high-speed one at that. on one port of the multiplexer will be the IRQ-capable GPIOs, and on the other port will be the various interfaces (SD/MMC, UART and so on).

and that adds in cost. extra cost pretty much automatically means “uncompetitive”. example: given a selection of SoCs, if one SoC is $4 but adding that SoC requires the addition of a $2 multiplexer IC in order to comply with the standard, why would you bother with it? you just go for the $5 SoC from a totally different supplier instead.
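to put that cost argument into concrete numbers (the dollar figures are the illustrative ones from the paragraph above, nothing more):

```python
# illustrative BOM comparison using the figures quoted above
cheap_soc = 4.00    # $4 SoC lacking enough IRQ-capable GPIOs
mux_ic = 2.00       # $2 two-port bidirectional multiplexer needed to comply
native_soc = 5.00   # $5 SoC that meets the IRQ requirement natively

compliant_cheap = cheap_soc + mux_ic   # the "cheap" option now costs $6
assert compliant_cheap > native_soc    # so the cheap SoC loses the comparison
```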

also, i finally managed to infer (after reading much further down) what the PCM / I2S stuff was about. it is expected that there will be three functions for those four pins: GPIO (IRQ-capable GPIO), PCM capability, and 4-wire I2S.

generally it is an extremely bad idea to expect all SoCs to be capable of multiplexing to three sets of functionality at once [see cost of adding two-port multiplexing ICs above]. one might say that in this instance it would be reasonable to expect PCM capability to be implemented via bit-banging, however that could result in significantly higher CPU load than end-users would be comfortable with.

anyway - i believe there may be more however this is taking up a significant amount of my time to review.


guys… i’m puzzled that a significant amount of feedback has been provided but i have not received a response or an acknowledgement from anyone at 96boards, nor have i received a response in email about working with 96boards on future standards.


Thanks for all the comments on the specification. We are collecting these and comments from elsewhere and will be posting more explanation of our thinking and the discussions that have been had over the current specification in coming weeks. We are in the process of forming the Linaro Community Boards Group (LCG) and will announce in the near future how the group will be directed and the specification will be evolved.


We’ve posted further explanation of Linaro’s thinking behind the specification in a blog on the Linaro website:


hi steve,

hope you don’t mind me being absolutely honest, but the explanation sounds like excuses, and also in no way acknowledges the barriers that have been raised.

firstly: you cannot evolve a standard that’s been released formally as a 1.0 specification. once that’s done, the only way that a standard may be modified is if you are prepared to be treacherous towards any and all people who have already committed to it, leaving them high and dry, as well as causing massive confusion for anyone wishing to commit to the future (modified) standard. not least, they will know that you’ve screwed things over for one group, so the chances are you’re likely to do it again.

in short, if you’re thinking of “evolving” this standard, DON’T. if it dies out because its flaws become self-evident over time (which you should have anticipated), let it happen, and learn from that.

in regard to what the CEO said:

“for example a “maker” board with 3.3V I/O and Arduino® or mbed® compatibility”.

well, that’s going to be more expensive than standard “maker” boards, because it requires I/O level-converters from the fixed voltage of the 96boards standard to the 3.3V I/O standard that arduino follows.

then, in future, when someone creates a SoC that uses 1.5V TTL or even 1.2V TTL (so as to be lower power), there will have to be yet more level converters on the (incredibly small) 96boards PCB.

if that’s even at all possible, especially if you have a huge number of I/Os to convert - remember that they have to be both 2-way and tri-state… and multiplexed!! because some of the GPIOs are shared with other functions.

basically you have made it massively complex to create future-compatible boards, which, in practice, means that the standard itself is an outright failure. i anticipate you will get maybe about… three customers creating boards for it within the next… year or so (basically, the time set by the move to the next lowest most commonly-used geometry).

hard questions:

(1) why didn’t anyone raise this for discussion before announcing the specification as a final release?

(2) where was the announcement to the general public that the specification was even being developed? the first time i ever heard of 96boards was when the first board came out - and i make it my business to follow the embedded space.

(3) i still have not received a response to any of my communications by email to 96boards. how can you expect people to contribute if you’re not answering when people offer to help?

bottom line is: you rushed this standard through with far from sufficient thought, and it’s going to bite you. standards take a long time to develop: you need to go over every single detail thoroughly, thinking through the practical consequences over the entire projected lifespan of the standard. get one thing wrong and that’s it - the standard’s a waste of time even before it’s out the door.

so… guys… the summary: your customers pay you a heck of a lot of money each year, and i expect that they expect you to find the best people. based on the evidence so far - this first standard - it is very clear that you did not do that. so if you’d like to pay me to help you do a better job in future, you know where to reach me.


guys, i went to quite a bit of effort (at monetarily zero charge) to write advice for you about the 96board standard, but i have not received an acknowledgement, even in the form of a “thank you”, nor a response to the offers to assist with the development of future specifications that i made to the official email address.

could i possibly ask why?


it has been nearly three years, now, since the announcement of the 96boards standard. it was quite by chance that i encountered this post. could someone from linaro please contact the CEO to ask for a public explanation?

[edit] in particular, the CEO may wish to include an apology to the customers of Linaro, whose payments fund the existence of the entire Linaro company. the apology - which is owed not to me but to those extremely high-profile clients of Linaro - might also explain why it is considered acceptable that people attending Linaro Connect are making jokes about an open standard at its creator’s expense. the CEO may in particular wish to dispel any implication that Linaro is associated with those individuals, who wish to imply that by making another standard “look bad” - publicly - the 96boards standards must therefore be “better”. such behaviour is completely unethical, and i trust that the CEO recognises it as such and will take appropriate action to undo the harm caused to the reputation of those affected by this thoughtless action.


OK, maybe I’m just new at this but -

  • Is there any place that specifies How Low the “low speed interface” is limited to?
  • Is there any place that specifies How High the “high speed interface” is limited to?


Each interface carries a bunch of different protocols. Maximum and minimum speeds differ from pin to pin and come from the underlying protocols. Hence “high” and “low” are mainly clues for a board designer, letting them know which interface they must pay special attention to in a design.

More generally, tinkerers and hackers can expect hook-up wire to work with the low speed connector because the signals are slow and, in most cases, strong enough that the capacitance of the wire doesn’t matter too much.

That is not true for the high speed connector. Here most of the signals (except the I2C control lines for the camera) are faster and more delicate.


Thanks for the feedback. But I think I’ll take the data I found on Reddit, at:

“Low speed” bus connects to an HD pin bank, with an absolute maximum performance of 250 Mb/s.
“High speed” bus connects to an HP pin bank, with a maximum performance of 1250 Mb/s.

So if I want to make an add-on board for a 12-bit 120M ADC [SDR application] there’s a chance I could use the “low speed” connector. Limiting factor isn’t the chip, but the connector, potential stray capacitance, impedance mismatches, etc.
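As a quick sanity check (my own back-of-the-envelope numbers, taking those Reddit figures at face value, not anything from the spec):

```python
# back-of-the-envelope bandwidth check using the figures quoted above
adc_bits = 12
adc_rate = 120e6                       # 120 MS/s
raw_bandwidth = adc_bits * adc_rate    # 1.44e9 b/s, i.e. 1440 Mb/s

ls_limit = 250e6     # "low speed" bus figure quoted above
hs_limit = 1250e6    # "high speed" bus figure quoted above

# the full ADC rate exceeds both limits, so the sample stream must be
# decimated; the highest rate the LS connector could carry at 12 bits:
max_ls_rate = ls_limit / adc_bits      # roughly 20.8 MS/s
```

So the connector really is the limiting factor: at full rate the raw stream exceeds even the high speed figure, and the low speed connector tops out somewhere around 20 MS/s for 12-bit samples.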


@Alan_Campbell: I think you are posting on the wrong topic! My answer describes the requirements of the 96Boards specification, because that is the topic you are posting under. The 96Boards spec does not dictate how fast a board must be able to perform GPIO on the LS connector, and for other pins (including the entirety of the HS connector) the maximum speed is dictated by the protocol assigned to the pin in the standard.

If you need to know about the limits of an individual board it is better to post in the board’s own category (e.g. ). All the 96Boards have differentiating factors, often including multi-function pins on the high and low speed connectors, but these fall outside the scope of the specification.