Month: October 2013

Supercomputing Technology and Applications Program starts operation

iVEC is pleased to announce that the Supercomputing Technology and Applications Program (STAP) team is now operational. The STAP team will help ensure that WA researchers are able to effectively use the current iVEC supercomputing resources (Epic and Fornax) and are ready to use the resources at the Pawsey Centre (now under construction).

The initial STAP team members are Chris Bording, Dr Daniel Grimwood, Dr Christopher Harris, Dr Nicola Varini and Dr Rebecca Hartman-Baker. The STAP team is led by Dr George Beckett.

These specialists will work with research groups, helping them to integrate the iVEC systems into their computing workflows and to exploit the potential of Petascale supercomputing. The STAP team will also be working with the iVEC Education Program to develop training materials and courses, and with the iVEC eResearch Program to grow the use of supercomputing resources into new research areas. More information on all of these activities will follow over the next few months.

To contact the STAP team directly, email the team.

As part of these preparations, STAP team members will be added to current project groups on Epic and Fornax, enabling them to quickly resolve any issue that a user may have. This will be the default policy going forward. If you have any concerns about this, or any other questions or comments about the Program, please contact the team at the email address above.


10 Types of Computers


There are a lot of terms used to describe computers. Most of these words imply the size, expected use or capability of the computer. While the term computer can apply to virtually any device that has a microprocessor in it, most people think of a computer as a device that receives input from the user through a mouse or keyboard, processes it in some fashion and displays the result on a screen.

Do you know the different types of computers?


The personal computer (PC) is a computer designed for general use by a single person. While a Mac is a PC, most people associate the term with systems that run the Windows operating system. PCs were first known as microcomputers because they were complete computers, but built on a smaller scale than the huge systems in use by most businesses.


A PC that is not designed for portability is a desktop computer. The expectation with desktop systems is that you will set the computer up in a permanent location. Most desktops offer more power, storage and versatility for less cost than their portable brethren.


Also called notebooks, laptops are portable computers that integrate the display, keyboard, a pointing device or trackball, processor, memory and hard drive in a battery-operated package slightly larger than an average hardcover book.


Netbooks are ultra-portable computers that are even smaller than traditional laptops. The extreme cost-effectiveness of netbooks (roughly $300 to $500) means they’re cheaper than almost any brand-new laptop you’ll find at retail outlets. However, netbooks’ internal components are less powerful than those in regular laptops.


Personal Digital Assistants (PDAs) are tightly integrated computers that often use flash memory instead of a hard drive for storage. These computers usually do not have keyboards but rely on touchscreen technology for user input. PDAs are typically smaller than a paperback novel, very lightweight with a reasonable battery life. A slightly larger and heavier version of the PDA is the handheld computer.


Another type of computer is a workstation. A workstation is simply a desktop computer that has a more powerful processor, additional memory and enhanced capabilities for performing a special group of tasks, such as 3D graphics or game development.


A server is a computer that has been optimized to provide services to other computers over a network. Servers usually have powerful processors, lots of memory and large hard drives. The next type of computer can fill an entire room.


In the early days of computing, mainframes were huge computers that could fill an entire room or even a whole floor! As the size of computers has diminished while the power has increased, the term mainframe has fallen out of use in favor of enterprise server. You’ll still hear the term used, particularly in large companies to describe the huge machines processing millions of transactions every day.


This type of computer usually costs hundreds of thousands or even millions of dollars. Although some supercomputers are single computer systems, most are composed of multiple high-performance computers working in parallel as a single system. The best-known supercomputers are built by Cray.

Wearable Computer

The latest trend in computing is wearable computers. Essentially, common computer applications (e-mail, database, multimedia, calendar/scheduler) are integrated into watches, cell phones, visors and even clothing. For more information, see these articles on computer clothing, smart watches and fabric PCs.


What are supercomputers currently used for?

Supercomputers conjure up the image of those massive, hulking, overheating machines that were the world’s introduction to computing — the ones that took up huge amounts of space to spit out computation after computation. You might be surprised to find out that even with the ubiquitous nature of the personal computer and network systems, supercomputers are still used in a variety of operations. In the next few pages, we’ll give you the skinny on what supercomputers are and how they still function in several industrial and scientific areas.

First, a little background. What makes a supercomputer so extraordinary? Well, the definition is a bit hard to pin down. Essentially, a supercomputer is any computer that’s one of the most powerful, fastest systems in the world at any given point in time. As technology progresses, supercomputers must up the ante as well.

For instance, the first supercomputer was the aptly named Colossus, housed in Britain. It was designed to read messages and crack the German code during the Second World War, and it could read up to 5,000 characters a second. Sounds impressive, right? Not to denigrate the Colossus’ hard work, but compare that to the NASA Columbia supercomputer, which completes 42.5 trillion operations per second. In other words, what used to be a supercomputer could now qualify as a satisfactory calculator, and what we currently call supercomputers are as advanced as any computer can get.

There are, however, a few things that make a computer branch into “super” territory. It will usually have more than one central processing unit (CPU), which allows the computer to do faster circuit switching and accomplish more tasks at once. (Because of this, a supercomputer will also have an enormous amount of storage so that it can access many tasks at a time.) It will also have the capability to do vector arithmetic, which means that it can calculate multiple lists of operations instead of just one at a time.
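The vector-arithmetic idea above can be sketched in a few lines of plain Python: one operation is applied across whole lists of numbers at once, instead of one scalar at a time. This is an illustrative toy, not any real supercomputer API, and the function names are made up for the example:

```python
# Illustrative sketch of vector arithmetic: operating on whole
# lists of numbers at once, rather than one scalar at a time.

def vector_add(a, b):
    """Elementwise sum of two equal-length vectors."""
    if len(a) != len(b):
        raise ValueError("vectors must be the same length")
    return [x + y for x, y in zip(a, b)]

def vector_scale(a, k):
    """Multiply every element of a vector by the scalar k."""
    return [x * k for x in a]

# One "vector operation" replaces a loop of scalar additions.
velocity = vector_add([1.0, 2.0, 3.0], [0.5, 0.5, 0.5])
print(velocity)                   # [1.5, 2.5, 3.5]
print(vector_scale(velocity, 2))  # [3.0, 5.0, 7.0]
```

Real vector hardware does the same thing in silicon: a single instruction streams through many data elements, which is why it pays off for the long lists of numbers found in scientific workloads.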

Now that we have a little background on supercomputers, let’s check out what a few of them do.

Meet the Supercomputers

As we said, supercomputers were originally developed for code cracking, as well as ballistics. They were designed to perform an enormous number of calculations at a time, which was a big improvement over, say, 20 mathematics graduate students in a room, hand-scratching operations.

In some ways, supercomputers are still used for those ends. In 2012, the National Nuclear Security Administration and Purdue University began using a network of supercomputers to simulate nuclear weapons capability. A whopping 100,000 machines are used for the testing.

But it’s not just the military that’s using supercomputers anymore. Whenever you check the weather app on your phone, the National Oceanic and Atmospheric Administration (NOAA) is using a supercomputer called the Weather and Climate Operational Supercomputing System to forecast weather, predict weather events, and track space and oceanic weather activity as well.

As of September 2012, the fastest computer in the world (for now, anyway) is IBM’s Sequoia machine, which operates at 16.32 petaflops. That’s more than 16,000 trillion operations per second. It’s used for nuclear weapon security and to make large-scale molecular dynamics calculations.
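The petaflop figure converts directly: one petaflop is 10^15 floating-point operations per second, so a quick back-of-the-envelope check (purely illustrative) looks like this:

```python
# One petaflop = 10**15 floating-point operations per second.
PETA = 10**15
TRILLION = 10**12

sequoia_flops = 16.32 * PETA           # Sequoia's reported rate

# Express the same rate in trillions of operations per second.
sequoia_trillions = sequoia_flops / TRILLION
print(sequoia_trillions)               # 16320.0 -- i.e. over 16,000 trillion
```

The same arithmetic puts Colossus in perspective: 5,000 operations a second is about twelve orders of magnitude below a modern machine, which is why yesterday's supercomputer reads like today's calculator.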

But supercomputers aren’t just somber, intellectual machines. Some of them are used for fun and games – literally. Consider World of Warcraft, the wildly popular online game. If a million people are playing WoW at a time, graphics and speed are of utmost importance. Enter the supercomputers, used to make the endless calculations that help the game go global.

Speaking of games, we can’t forget Deep Blue, the supercomputer that beat chess champion Garry Kasparov in a six-game match in 1997. And then there’s Watson, the IBM supercomputer that famously beat Ken Jennings and Brad Rutter in an intense game of Jeopardy. Currently, Watson is being used by a health insurer to predict patient diagnoses and treatments. A real jack of all trades, that Watson.

So, yes: We’re still benefiting from supercomputers. We’re using them when we play war video games and in actual war. They’re helping us predict if we need to carry an umbrella to work or if we need to undergo an EKG. And as the calculations become faster, there’s little end to the possibility of how we’ll use supercomputers in the future.


Get Happy! – Reducing Internet Latency

Latency is an increasingly important topic for networking researchers and Internet users alike. Whether trying to provide platforms for Web applications, high frequency stock trading, multi-player online gaming or ‘cloud’ services of any kind, latency is a critical factor in determining end-user satisfaction and the success of products in the marketplace. Data from Google, Microsoft, Amazon and others indicate that latency increases for interactive Web applications result in less usage and less revenue from sales or advertising income. Consequently, latency and variation in latency are key performance metrics for services these days.
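Since both latency and variation in latency are called out as key metrics, here is a small, illustrative Python sketch of how such metrics are commonly summarised from a batch of measured round-trip times. The sample numbers are invented for the example:

```python
import math
import statistics

# Hypothetical round-trip times in milliseconds for one service.
samples_ms = [12, 14, 13, 15, 12, 90, 13, 14, 12, 13]

mean = statistics.mean(samples_ms)
jitter = statistics.stdev(samples_ms)   # variation in latency

# Tail latency via the nearest-rank method: the smallest sample
# such that at least 95% of measurements are at or below it.
ordered = sorted(samples_ms)
p95 = ordered[math.ceil(0.95 * len(ordered)) - 1]

print(f"mean={mean:.1f} ms  stdev={jitter:.1f} ms  p95={p95} ms")
```

Note how the single 90 ms outlier barely moves the mean but dominates the 95th percentile; that is exactly why interactive services track tail latency and jitter rather than averages alone.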

But latency reduction is not just about increasing revenues for big business. Matt Mullenweg of WordPress motivates work on latency reduction well when he says, “My theory here is when an interface is faster, you feel good. And ultimately what that comes down to is you feel in control. The [application] isn’t controlling me, I’m controlling it. Ultimately that feeling of control translates to happiness in everyone. In order to increase the happiness in the world, we all have to keep working on this.”

Latency has tended to be sacrificed in favour of headline bandwidth in the way the Internet has been built. Later this year, together with the RITE Project, Simula Research Labs and the TimeIn Project, we are sponsoring a two-day, invitation-only workshop that aims to galvanise action to fix that. All layers of the stack are in scope.

More details about the workshop and how to submit a position paper are available here. Deadline for receipt of position papers is June 23.

Future-proofing Broadband: IPv6 and Security Needs for Your Network

Broadband networks are key to meeting our vision at the Internet Society:

The Internet is for everyone. And key to the growth and health of broadband networks are network addresses, given the shrinking pool of old-school IPv4 addresses, and constant vigilance against security threats. There are real and important advances on both fronts that we’re excited to bring to this, our third year at the Broadband World Forum.


By now you’ve heard about IPv6 – the not-so-new Internet Protocol that provides over 340 undecillion IP addresses – and we hope many of you have deployed it on your networks already.
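That “340 undecillion” figure falls straight out of the 128-bit address length, and Python’s standard ipaddress module makes it easy to check. A purely illustrative sketch, using the reserved documentation prefix 2001:db8::/32:

```python
import ipaddress

# IPv4 addresses are 32 bits; IPv6 addresses are 128 bits.
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128

print(ipv4_total)   # 4294967296 -- roughly 4.3 billion
print(ipv6_total)   # about 3.4 x 10**38, i.e. over 340 undecillion

# Even a single /64 customer prefix holds 2**64 addresses --
# far more than the entire IPv4 Internet.
net = ipaddress.ip_network("2001:db8::/64")
print(net.num_addresses)   # 18446744073709551616
```

The practical upshot is that IPv6 removes address scarcity as a design constraint: providers can hand every subscriber an address block larger than today's whole IPv4 address space.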

Did you know that the world’s largest content providers already serve their sites over IPv6, including the top five Alexa websites: Google, Facebook, YouTube, Yahoo! and Wikipedia? Netflix and YouTube both deliver content over IPv6, and together they account for a large share of the bandwidth consumed on the global Internet.

And, while it’s been a slow start to IPv6 deployment, we’ve seen huge changes since our first trip to BBWF.

  • 2011: World IPv6 Day had just occurred where Google, Facebook, and Yahoo! enabled IPv6 on their main websites for a 24-hour ‘test flight’ (and then turned it off). Some broadband providers had begun their deployments, but numbers were limited and experience was just starting to grow. 
  • 2012: World IPv6 Launch had begun and Google, Facebook, Yahoo!, YouTube, and Wikipedia, the five most-visited websites in the world, and thousands more had enabled IPv6 permanently. Broadband providers were rolling out substantial deployments. 
  • 2013: More than a year after World IPv6 Launch, monthly measurements are showing real progress at ISPs, equipment manufacturers, website providers, and enterprise organizations all across the globe.

During the IP Evolution Track on Tuesday at BBWF, we’re putting together an IPv6-focused session that will bring together several operators to share their experiences on how they deployed IPv6, the challenges they faced and overcame, and the successes they’ve enjoyed since.

Moving network traffic from IPv4 to IPv6 is the only way to alleviate pressures caused by the global depletion of available IPv4 addresses and prevent the need for expensive and unreliable Carrier Grade NATs (CGNs).


Security Workshop

We also work in the area of Internet security – everything from domain name security to email security to general network security. High-profile DDoS attacks disrupt operations and grab headlines (and not the good kind). Routing infrastructure abuses and even innocent misconfigurations can be quite disruptive to a network. And spam! We’re still trying to eliminate spam from our inboxes, avoid phishing scams, and figure out how to know if and when we can trust the source of the emails we get every day.

All of these issues cost time, money, and customer confidence; a secure and resilient network is vital to continuing your existing operations and growing your businesses. With so much interconnection and interdependency on the global Internet, we must manage risk collaboratively to achieve effective security for everyone. An open Internet unveils tremendous benefits for the economy and society in general, and for individual businesses in particular. But it also brings new security challenges and risks that must be managed carefully.

Join us in Content Hub 2 on Tuesday as experts in these areas discuss the latest security issues, possible and potential solutions, and how to protect your systems, networks, employees, and customers. We’ll also discuss the role of corporate responsibility in reducing risk to the overall Internet – even when there’s no immediate business case for “doing the right thing” for the future of the Internet.