Update Oracle EVP John Fowler has damned disk technology, praised flash and said tape has a future in an InformationWeek interview and yesterday's Oracle strategy webcast.
Fowler is Oracle's top dog for server and storage systems and it's worth paying attention to what he likes and dislikes, since systems embodying these views will be coming out of Oracle over the next few years. Oracle's overarching idea is to build integrated and converged stacks of server, storage and systems software so that data centre applications can chew through vastly more data than they do today and at a ferocious pace. Fowler described a Sparc processor strategy leading to systems in 2015 with 128 cores, 16,384 threads and the ability to process 120 million database transactions per minute.
How will disk-based storage keep up with that?
He thinks that the separation that has grown up between servers and storage - meaning storage area networks (SAN) and filers - is wrong. EMC and NetApp prospered because they built great products but, in Fowler's view, it's time to bring storage back in-house and integrate it with the servers and the applications.
Networked storage was developed in relative isolation from servers and applications such as Oracle databases and SAP, Siebel and PeopleSoft software. Fowler reckons that his development people can now understand the bandwidth, memory I/Os and latency needed by such software running on Oracle's servers, and develop storage that delivers what the applications and servers will need.
Disks - 'old and fail a lot'
This is what Fowler thinks about disks: "They're really old, and they fail a lot", likening them to TV tubes and the transistor, and saying that there's going to be a tidal wave of development in storage. In the supercharged world of server processing he's outlining there will be no willingness to have threads and cores wait for disk I/O. Like today's supercomputers it's going to be all about building a pipeline with the right capacity, latency and speed to feed and receive data from the hundreds of cores executing Oracle or SAP apps and keeping them busy. Logically there is no place for disk as a primary data storage tier in this scenario.
He is planning for a 15-fold increase in storage controller throughput and 50-fold improvement in storage controller capacity by 2015.
Fowler reckons that servers and storage are changing as we speak and will change even more as his development people get the deep insight into what the big iron applications need. There's going to be a constant escalation in system performance because of what his people will do in developing servers and storage. He also says storage is going to come down in price over the next five years, saying it's one of the most expensive and challenging to manage items in data centres today, and that is going to change.
Flash and silos - Oracle wants it all
He talked about putting 50TB of data in flash so that entire data models can fit in memory and their processing can occur in real-time instead of after the event. This is the TMS RamSan idea, having a database in solid state storage, but in Oracle's view there shouldn't be a separate supplier shipping a separate box; the solid state storage should be integrated and supplied by Oracle. It appears that storage will become more directly attached to a system. We might envisage an Oracle database and sundry middleware server setup which has links to its own online storage, which is not shared with other applications. Could we be seeing a return of the silo here - a more integrated silo but a silo nonetheless?
Thus, when application speed and manageability are the goals, if local storage silos make the app and servers go faster they will be chosen. Talking of separate storage and servers and of EMC and NetApp, Fowler said there are lots of technical reasons why you could build servers and storage better for greater performance and manageability. There will be casualties as the interface between servers and storage is refactored.
There is a downside for EMC, NetApp and others here; Oracle implicitly wants to decrease their attach rate in Oracle shops. As ever, Ellison wants more of a customer's IT spend devoted to Oracle and less to other suppliers.
Oracle will continue, it appears, to sell separate servers and storage products to run other vendors' applications, but its big focus is going to be on building Exadata-like tailored and integrated appliances for its own and other enterprise suppliers' big iron applications.
Tape - 20TB by 2015
There is a re-emphasis on tape, where Oracle's recent StreamLine 8500 tape library announcements have been underwhelming compared to what competitors such as SpectraLogic are doing. Tape offers terrific value for storing bulk data for a long time - far more than disk. Fowler thinks that the StreamLine 8500 library will have a two-exabyte capacity in 2015 and a bandwidth of 1,380TB/hour. It will use tape cartridges with 20TB capacities instead of today's 1TB. Hold on a moment, where has this suddenly come from?
The last we heard about Oracle's tape plans was back in May, when the firm said a new generation of tape and libraries would come inside 12 months. The new tape format would have a higher capacity and more throughput than the existing StorageTek T10000B, with its 1TB capacity and 120MB/sec throughput. Now we have a roadmap to a 20TB cartridge.
Assuming this is not LTO tape but the StorageTek format and that the T10000C or whatever it will be called comes in the next few months, then how many generations does that give us to reach 20TB?
The LTO consortium has, we think, this roadmap timetable (assuming 2.5 years between generations): LTO-6 with 3.2TB raw in late 2012, LTO-7 with 6.4TB raw in early 2015, and LTO-8 with 12.8TB raw in late 2017. Oracle is proposing to jump from 1TB today, the T10000B, to 20TB in five years!
Let's assume the first new format, possibly called the T10000C (T10K C), comes out very soon and at least doubles capacity to 2TB. Then, at 2.5 years between generations, we have two generations to reach 20TB: say a 10TB product (T10K D) in mid-2013 and a 20TB one (T10K E) in 2015. That is, by tape standards, a fantastically aggressive schedule.
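To see just how aggressive, here is a back-of-the-envelope sketch of the per-generation capacity growth the speculated roadmap above would require, against LTO's habit of roughly doubling each generation. The 2TB starting point and two-generation jump are the article's guesses, not Oracle figures:

```python
# Speculated StorageTek roadmap: 2TB T10K C now, 20TB two generations later
start_tb, target_tb, generations = 2, 20, 2

# Per-generation capacity multiplier needed to hit the 20TB target
factor = (target_tb / start_tb) ** (1 / generations)
print(f"Required growth per generation: {factor:.2f}x")  # ~3.16x

# LTO, by contrast, roughly doubles capacity each generation
lto_factor = 2.0
after_two = start_tb * lto_factor ** generations
print(f"LTO-style doubling after {generations} generations: {after_two:.0f}TB "
      f"versus the 20TB target")
```

In other words, Oracle would need better than 3x capacity per generation where the rest of the tape industry manages about 2x.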
IBM and Fujifilm have announced a research development producing a 35TB tape, so in theory 20TB capacity is reachable.
Tape library - all a bit hush-hush
A 2EB StreamLine library would have 100,000 slots for 20TB tapes, if that's 2EB of raw capacity. The throughput is said to be 1,380TB/hour, but we don't know how many drives there would be or their throughput. We are looking at the library doing 0.3833TB/sec. If there were 10 drives that means each would do 0.03833TB/sec - more than 300 times today's T10K B doing 120MB/sec. This seems a rather startling increase in data throughput. Having 20 drives would stretch drive performance less, but we have no idea how many drives there will be, so speculation is fruitless.
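The arithmetic above can be run for a range of hypothetical drive counts; only the 1,380TB/hour library figure and today's 120MB/sec T10K B speed come from the article, the drive counts are illustrative guesses:

```python
# Sanity-check the 2015 StreamLine library throughput speculation
library_tb_per_hour = 1380
library_tb_per_sec = library_tb_per_hour / 3600  # ~0.3833 TB/sec
print(f"Library: {library_tb_per_sec:.4f}TB/sec")

t10k_b_mb_per_sec = 120  # today's T10000B drive

# Per-drive throughput for some guessed drive counts
for drives in (10, 20, 100):
    per_drive_mb = library_tb_per_sec * 1_000_000 / drives
    print(f"{drives:>3} drives -> {per_drive_mb:,.0f}MB/sec per drive "
          f"({per_drive_mb / t10k_b_mb_per_sec:,.0f}x a T10K B)")
```

Even at 100 drives each one would need to run at over 30 times a T10K B's speed, which shows why the drive count matters so much to the plausibility of the figure.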
No doubt StreamLine 8500 customers will be getting the good roadmap capacity and throughput news via non-disclosure sessions.
All in all Oracle's storage media views can be summed up as: disk crap, flash great and tape wonderful. Well, Oracle hates tape really but it's a necessary evil because nothing else comes close to its ability to store masses of data cheaply. It remains to be seen how Oracle will treat everyday disk storage but we imagine something like a twin-tier concept of flash and bulk SATA drives might appeal.
Update Details of Oracle's roadmap for tape formats, drives and the SL8500 tape library have emerged and are in a separate story. It is a 3-generation job by the way. ®