In general terms, 'optical systems' in commercial applications usually refers to data transmission: everything from phone calls to music to web pages to the image of your last chest x-ray.
Optical isn't our only choice of medium to carry the information; we could do it electrically with wires, or wirelessly with RF signals, too. But each of those has its own issues and limitations when it comes to moving ever larger quantities of data around.
So if you want to get a lot of information from here to there, optical is king.
The most common, and easiest to understand, application right now is fibre optics. Light goes in one end of a glass fibre, travels some distance, and comes out the other end. If you turn the light on and off very rapidly, you can transmit digital data in binary, as a series of 1s (the light is on) and 0s (the light is off). Pretty simple stuff. In real life it's a bit more complicated than that, and involves lasers (sadly, no sharks!), but for illustration that'll do for now.
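If you'd like to see that idea in code, here's a toy sketch in Python (mine, not anyone's real transceiver logic); no line coding, clock recovery, or error correction, just bytes becoming a stream of light-on/light-off symbols and back:

```python
# Toy illustration of on-off keying: digital data as light on/off.
# (A sketch, not a real transceiver; real links add line coding,
# clock recovery, and error correction.)

def to_bits(data: bytes) -> list[int]:
    """Bytes -> the 1s (light on) and 0s (light off) we'd send."""
    return [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]

def from_bits(bits: list[int]) -> bytes:
    """Received on/off samples -> bytes again."""
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

message = b"hello"
assert from_bits(to_bits(message)) == message   # light on/off, there and back
```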
Optical cables don't suffer from interference like copper wires or wireless signals do; the light is trapped in the fibre, and no new light can get in. Over long distances, or at higher signalling speeds, copper and wireless can't compare. If you're over the age of 40 you might remember a time when long distance phone calls didn't sound very loud or clear... today it's hard to tell how far away you're calling.
You can thank the incredible amount of fibre in use today... here is a map of undersea cable for your amusement; a map of the land cable wouldn't be all that useful, because fibre is everywhere now. A lot of new housing developments in North America have fibre pre-run to each house, so one day we'll actually get to the holy grail of the entertainment industry.
So where do all those web pages and movies and music come from? Datacenters: basically big buildings, or floors of buildings, full of all the computer goodness needed to store, process, and serve up data on demand. And they are everywhere; every company has all the components today, it's just a matter of scale. From the laptop-toting consultant to the biggest names in computing, the market for improvements is everywhere.
So what's to improve?
You turn on your computer and your iTunes works. You do a Google search, and the results come back as you type in your query. Life seems pretty ducky.
But it's not.
For starters, the spend on electricity is enormous, and growing. Even if you don't increase the footprint of your datacenter (or the more modest server room in your business), you'll want the servers there to do more and more work: more files to store, more transactions to process, and more queries to mine the data you already have.
And all that electricity isn't converted into digital signals; in fact, most of it is used for building operations, not compute. Said another way, most of the non-compute power goes to removing the heat the computers generate. If half of your power is actually directed to doing computational work, you're doing OK.
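The industry metric for this is PUE (Power Usage Effectiveness): total facility power divided by the power that actually reaches the IT gear. A quick back-of-the-envelope sketch, with made-up numbers:

```python
# Back-of-the-envelope datacenter power math (numbers are made up).
# PUE = total facility power / IT power; "half your power reaching
# the computers" works out to a PUE of 2.0.

total_facility_kw = 2_000   # hypothetical: everything the utility bills you for
it_load_kw = 1_000          # hypothetical: what the servers and switches draw

pue = total_facility_kw / it_load_kw
print(f"PUE: {pue:.2f}")                                  # 2.00
print(f"Overhead (mostly cooling): {total_facility_kw - it_load_kw} kW")
```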
It gets worse... with common systems today, a large chunk of the electricity directed to computation isn't actually doing anyone any good - it's wasted as signals are converted from one form to another.
Getting data off a hard drive in a storage array? You're converting the signal from the disk platter into light so you can place it on your Fibre Channel storage network and send it to a storage switch, which can then send it to the server for use. But wait!! That storage switch isn't optical... the guts are electronic. So you have to convert the signal back to an electrical one, figure out where it should go, and send it out the right optical interface.
The server is going to do the same thing; it needs to translate the information passing in and out of its optical Fibre Channel interface into electrical form, so it can get it into the CPU for processing.
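Counting it up, that one read path already looks something like this (a simplified model; hop names are illustrative, and a real SAN path crosses more gear than this):

```python
# Simplified model of that disk-to-CPU read path, counting the
# optical-electrical conversions along the way.

read_path = [
    ("storage array interface", "EO"),  # platter signal -> light onto the SAN
    ("storage switch ingress",  "OE"),  # switch guts are electronic: back to electrical
    ("storage switch egress",   "EO"),  # ...then back to light on the right port
    ("server HBA",              "OE"),  # electrical again, so the CPU can use it
]

print(f"{len(read_path)} conversions for one pass through one switch")
# Each conversion wastes electricity and makes heat, and a real path
# usually crosses more than one switch before the data network even starts.
```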
And since these servers (millions of them around the world) are only reachable over either the public internet or a private network, you'll need a data network interface, too. Chances are the data network has optical components in it just like the storage network; in fact, current switches are storage/data agnostic (referred to as 'Multilayer Datacenter Switches'), so they do both jobs.
And don't forget that servers aren't just talking to users; they are talking to each other. The web server has to talk to the application server, which has to talk to the database server, and all of this has to be backed up at some point, either off-peak or constantly. Putting hundreds or thousands of servers together in the same place, but uniquely reachable, creates some pretty large aggregate bandwidth challenges.
How big?
- A modestly big server will use a 10Gbps (that's gigabits per second) Ethernet link, and it probably has (or should have) two links for redundancy.
- A rack full of blade-chassis-style servers can drive 20Gbps (so again, 40Gbps are connected for redundancy).
- A rack switch might have 20-40Gbps of uplink to service many lower speed servers at 1Gbps each.
Some of these connections can be copper based today, but there are distance limits. The bonding of multiple 10 gig links into port channels of 20 or 40 or more is an electrical process, so a conversion is done. Again.
And all of this data has to be switched, routed, load-balanced, optimized, de-duplicated, firewalled, and put back on a wire (or optical link) somewhere. An enterprise-class core router/switch today has a backplane capable of passing at least 720 Gbps of traffic (that doesn't count front side port capacity, which is much higher). Oh, and don't forget you need two of everything, for redundancy. This ain't cheap, folks.
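To put some rough numbers on all of the above (the per-rack, backplane, and bonding figures come from the text; the rack count is hypothetical):

```python
# Rough aggregate-bandwidth math. The 40Gbps-per-rack and 720Gbps
# backplane figures are from above; the rack count is hypothetical.

racks = 20
gbps_per_rack = 40                    # blade chassis: 2 x 20Gbps for redundancy
row_aggregate = racks * gbps_per_rack
print(f"One row of {racks} racks: {row_aggregate} Gbps")          # 800 Gbps

backplane_gbps = 720                  # enterprise-class core switch backplane
print(f"Racks one core box absorbs at full tilt: {backplane_gbps // gbps_per_rack}")  # 18

# And the bonding itself: a 40Gbps port channel is four 10Gbps members,
# each with its own OE conversion at both ends.
members = 40 // 10
print(f"OE modules per bonded 40G link: {members * 2}")           # 8

# ...and remember, you need two of everything for redundancy.
```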
The number of OE (optical-electrical) conversions is staggering, and at every step electricity is wasted and heat is created. Each one of those ports probably needs an OE interface module, too; today the standard is called 'SFP', and you'll typically pay around $1000 for each one, depending on the job it does. At least you can buy them in bulk.
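To get a hedged feel for what those modules alone add up to (the ~$1000 figure is from above; the port counts are hypothetical):

```python
# What the OE interface modules alone can cost. The ~$1000-per-SFP
# figure is from above; the port counts are hypothetical.

racks = 100
ports_per_rack = 4        # 2 storage + 2 data links, redundant pairs
ends_per_link = 2         # an SFP at each end of every fibre
sfp_cost_usd = 1_000      # varies a lot by speed and reach

total = racks * ports_per_rack * ends_per_link * sfp_cost_usd
print(f"SFP spend alone: ${total:,}")   # $800,000, before any switch hardware
```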
The net result is that a great deal of resources, capital, and energy go into everything BUT directly addressing the computational load of interest to the business or customer... today that's the best we can do, because the ASICs used in commodity equipment are really designed to max out at or below 10Gbps per port.
And yes, some would call this 'a racket'.
I've been part of several projects to build, or re-build, mid-sized datacenters, and the spend on the physical network is ALWAYS at least $1M. Just over half is spent on switching and routing hardware, and just less than half on structured cabling, of which the fibre portion is the lion's share.
If you can't tell by now, it's my opinion that the majority of this spend, perhaps 80%, is only to service architectural requirements driven by servers and applications that come with significant baggage.
Enter POET
So what exactly is POET?
It's a patented process and design for making semiconductors, ie chips, using materials friendly to optoelectronics and compute applications. If you've read this far, you should probably read the whole intro page about ODIS and POET.
And the big deal is....?
- Low power / low heat; the energy savings to the industry... errr... globe are tremendous.
- Very fast switching - less messing around with signal port channeling and mux/demux as we do now on 10Gbps and DWDM networks; drive fewer lines at higher speeds, ie in the 100-500Gbps range. We can certainly drive 100Gbps across single links today, but not at commodity pricing.
- Doesn't mind being in outer space (ie the DoD and NASA are funding it now - here is a summary)
- Packaging - the idea of a VCSEL has been around for a while, but POET, as a fab process (and a slightly different way of controlling the laser), has made the promise that compute and optical can finally live on the same chip, within the same package.
This is big; a chip could talk to other chips, or a single-chip server could talk to anything off-chip, like storage, network, or other servers via an on-chip optical interface at higher speeds and lower power than ever possible before.
If you squint just a bit, you might see a future where there is no fundamental difference between the chip acting as the CPU and the chip acting as the network switch. Squint further and you'll realize that, with current-generation virtual machine capabilities overlaid on such hardware, there is no reason to think there should be a hardware difference between the chips; in fact, it's probably more advantageous to just have a homogeneous array of chips, interconnected by down-facing detector and VCSEL arrays to optical waveguides on the carrier PCB... but that's another story.
I'll explore some other applications in future posts, like optical switching, which completes the puzzle, and where I think Opel is in their roadmap to commercialization.
Afterword
A loooong time ago we lived in Sherwood Park, Alberta, and a family friend knew the gang that started Myrias Research. I wish they were still around (as a company) today...