Samsung briefly explains reason behind Galaxy Note7 explosions

For those interested in the behind-the-scenes details of why some Samsung Galaxy Note7s are exploding, it’s worth knowing that the South Korean company has briefly shared the outcome of its internal investigation into the issue.
We all know that faulty batteries (from Samsung’s own subsidiary SDI) are to blame for the issue, but the company – on its UK website – goes a step further to explain that overheating of the battery cell occurs “when the anode-to-cathode came into contact.” The tech giant further says that “it is a very rare manufacturing process error.”

In addition, Samsung revealed that of the 35 explosions that have come to its notice through its customer service centers, 17 have happened in Korea, 17 in the US, and 1 in Taiwan. The company, however, said that “there have been no reported injuries globally.”

Finally, Samsung declined to officially confirm that it’s Samsung SDI whose batteries are at fault. When asked about this, the company just said, “Unfortunately we will not be able to confirm this as we work with several suppliers.”

Source: www.gsmarena.com


SSD vs. HDD: What’s the Difference?

A hard drive is a hard drive, right? Not exactly. We lay out the differences between SSD and HDD storage to help you figure out which type is the best choice.


Until recently, PC buyers had very little choice about what kind of file storage they got with their laptop, ultrabook, or desktop. If you bought an ultrabook or ultraportable, you likely had a solid-state drive (SSD) as the primary drive (C: on Windows, Macintosh HD on a Mac). Every other desktop or laptop form factor had a hard disk drive (HDD). Now, you can configure your system with an HDD, an SSD, or in some cases both. But how do you choose? We explain the differences between SSDs and HDDs, and walk you through the advantages and disadvantages of both to help you come to your decision.

HDD and SSD Explained
The traditional spinning hard drive (HDD) is the basic nonvolatile storage on a computer. That is, it doesn’t “go away” like the data on the system memory when you turn the system off. Hard drives are essentially metal platters with a magnetic coating. That coating stores your data, whether that data consists of weather reports from the last century, a high-definition copy of the Star Wars trilogy, or your digital music collection. A read/write head on an arm accesses the data while the platters are spinning in a hard drive enclosure.

An SSD does much the same job functionally (saving your data while the system is off, booting your system, etc.) as an HDD, but instead of a magnetic coating on top of platters, the data is stored on interconnected flash memory chips that retain the data even when there’s no power present. The chips can either be permanently installed on the system’s motherboard (like on some small laptops and ultrabooks), on a PCI/PCIe card (in some high-end workstations), or in a box that’s sized, shaped, and wired to slot in for a laptop or desktop’s hard drive (common on everything else). These flash memory chips differ from the flash memory in USB thumb drives in the type and speed of the memory. That’s the subject of a totally separate technical treatise, but suffice it to say that the flash memory in SSDs is faster and more reliable than the flash memory in USB thumb drives. SSDs are consequently more expensive than USB thumb drives for the same capacities.

A History of HDDs and SSDs
Hard drive technology is relatively ancient (in terms of computer history). There are well-known pictures of the famous IBM 350 RAMAC hard drive from 1956 that used 50 24-inch-wide platters to hold a whopping 3.75MB of storage space. This, of course, is the size of an average 128Kbps MP3 file, in a physical space that could hold two commercial refrigerators. The IBM 350 was only used by government and industrial customers, and was obsolete by 1969. Ain’t progress wonderful? The internal cable interface has changed over the years, from IDE to SCSI to SATA, but each essentially does the same thing: connect the hard drive to the PC’s motherboard so your data can be processed. The PC hard drive form factor standardized in the early 1980s with the desktop-class 5.25-inch form factor, with 3.5-inch desktop and 2.5-inch notebook-class drives coming soon thereafter. Today’s 2.5- and 3.5-inch drives use SATA interfaces almost exclusively (at least on most PCs and Macs). Capacities have grown from multiple megabytes to multiple terabytes, a millions-fold increase. Current 3.5-inch HDDs max out at 6TB, with 2.5-inch drives at 2TB max.

The SSD has a much more recent history. There was always an infatuation with non-moving storage from the beginning of personal computing, with technologies like bubble memory flashing (pun intended) and dying in the 1970s and ’80s. Current flash memory is the logical extension of the same idea. The flash memory chips store your data and don’t require constant power to retain that data. The first primary drives that we know as SSDs started during the rise of netbooks in the late 2000s. In 2007, the OLPC XO-1 used a 1GB SSD, and the Asus Eee PC 700 series used a 2GB SSD as primary storage. The SSD chips on low-end Eee PC units and the XO-1 were permanently soldered to the motherboard. As netbooks, ultrabooks, and other ultraportables became more capable, the SSD capacities rose, and eventually standardized on the 2.5-inch notebook form factor. This way, you could pop a 2.5-inch hard drive out of your laptop or desktop and replace it easily with an SSD. Other form factors emerged, like the mSATA miniPCI SSD card and the DIMM-like SSDs in the Apple MacBook Air, but today many SSDs are built into the 2.5-inch form factor. The 2.5-inch SSD capacity tops out at 1TB currently, but will undoubtedly grow as time goes by.

Advantages and Disadvantages
Both SSDs and HDDs do the same job: They boot your system, store your applications, and store your personal files. But each type of storage has its own unique feature set. The question is, what’s the difference, and why would a user get one over the other? We break it down:

Price: To put it bluntly, SSDs are very expensive in terms of dollars per GB. For the same capacity and form factor (a 1TB internal 2.5-inch drive), you’ll pay about $75 for an HDD, but as of this writing, an SSD is a whopping $600. That translates into roughly eight cents per GB for the HDD and 60 cents per GB for the SSD. Other capacities are slightly more affordable (250 to 256GB: $150 SSD, $50 HDD), but you get the idea. Since HDDs are the older, more established technology, they will remain less expensive for the near future. Those extra hundreds of dollars may push your system price over budget.
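If you want to check the math yourself, here’s a quick back-of-the-envelope sketch in Python using the article’s prices (decimal gigabytes assumed; the 1TB HDD works out to 7.5 cents, which we round up to eight):

```python
# Cost-per-GB arithmetic using the article's 2014 prices (1TB = 1,000GB).
drives = {
    "1TB HDD":   (75.00, 1000),   # (price in USD, capacity in GB)
    "1TB SSD":   (600.00, 1000),
    "250GB HDD": (50.00, 250),
    "256GB SSD": (150.00, 256),
}

for name, (price, capacity_gb) in drives.items():
    cents_per_gb = price / capacity_gb * 100
    print(f"{name}: {cents_per_gb:.1f} cents per GB")

# 1TB HDD:   7.5 cents per GB  (rounded to eight in the text above)
# 1TB SSD:   60.0 cents per GB
# 250GB HDD: 20.0 cents per GB
# 256GB SSD: 58.6 cents per GB
```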

Maximum and Common Capacity: As seen above, SSD units top out at 1TB, but those are still very rare and expensive. You’re more likely to find 128GB to 500GB units as primary drives in systems. You’d be hard pressed to find a 128GB HDD in a PC these days, as 250 or even 500GB is considered a “base” system in 2014. Multimedia users will require even more, with 1TB to 4TB drives as common in high-end systems. Basically, the more storage capacity, the more stuff (photos, music, videos, etc.) you can hold on your PC. While the (Internet) cloud may be a good place to share these files between your phone, tablet, and PC, local storage is less expensive, and you only have to buy it once.


Speed: This is where SSDs shine. An SSD-equipped PC will boot in seconds, certainly under a minute. A hard drive requires time to spin up to operating specs, and will continue to be slower than an SSD during normal operation. A PC or Mac with an SSD boots faster, launches apps faster, and has higher overall performance. Witness the higher PCMark scores on laptops and desktops with SSDs, plus the much higher scores and faster transfer times for external SSDs vs. HDDs. Whether it’s for fun, school, or business, the extra speed may be the difference between finishing on time and failing.

Fragmentation: Because of their rotary recording surfaces, HDDs work best with larger files that are laid down in contiguous blocks. That way, the drive head can start and end its read in one continuous motion. When hard drives start to fill up, large files can become scattered around the disk platter, which is known as fragmentation. While read/write algorithms have improved to the point where the effect is minimized, the fact of the matter is that HDDs can become fragmented, while SSDs don’t care where the data is stored on their chips, since there’s no physical read head. SSDs are inherently faster.

Durability: An SSD has no moving parts, so it is more likely to keep your data safe in the event that you drop your laptop bag or your system is shaken about by an earthquake while it’s operating. Most hard drives park their read/write heads when the system is off, but the heads are flying over the drive platter at hundreds of miles an hour when they are in operation. Besides, even parking brakes have limits. If you’re rough on your equipment, an SSD is recommended.

Availability: Hard drives are simply more plentiful. Look at the product lists from Western Digital, Toshiba, Seagate, Samsung, and Hitachi, and you’ll see many more HDD model numbers than SSDs. For PCs and Macs, HDDs won’t be going away completely, at least for the next couple of years. You’ll also see many more HDD choices than SSD choices from different manufacturers for the same capacities. SSD model lines are growing in number, but HDDs are still in the majority for storage devices in PCs.


Form Factors: Because HDDs rely on spinning platters, there is a limit to how small they can be manufactured. There was an initiative to make smaller 1.8-inch spinning hard drives, but that’s stalled at about 320GB, since the MP3 player and smartphone manufacturers have settled on flash memory for their primary storage. SSDs have no such limitation, so they can continue to shrink as time goes on. SSDs are available in 2.5-inch laptop drive-sized boxes, but that’s only for convenience, as stated above. As laptops become slimmer and tablets take over as primary Web surfing platforms, you’ll start to see the adoption of SSDs skyrocket.

Noise: Even the quietest HDD will emit a bit of noise when it is in use from the drive spinning or the read arm moving back and forth, particularly if it’s in a system that’s been banged about or in an all-metal system where it’s been shoddily installed. Faster hard drives will make more noise than slower ones. SSDs make virtually no noise at all, since they’re non-mechanical.

Overall: HDDs win on price, capacity, and availability. SSDs work best if speed, ruggedness, form factor, noise, or fragmentation (technically part of speed) are important factors to you. If it weren’t for the price and capacity issues, SSDs would be the winner hands down.

As far as longevity goes, while it is true that SSDs wear out over time (each cell in a flash memory bank has a limited number of times it can be written and erased), thanks to TRIM technology built into SSDs that dynamically optimizes these read/write cycles, you’re more likely to discard the system for obsolescence before you start running into read/write errors. The possible exceptions are high-end multimedia users like video editors who read and write data constantly, but those users will need the larger capacities of hard drives anyway. Hard drives will eventually wear out from constant use as well, since they use physical recording methods. Longevity is a wash when it’s separated from travel and ruggedness concerns.

The Right Storage for You
So, does an SSD or HDD (or a hybrid of the two) fit your needs? Let’s break it down:

HDDs
• Multimedia Mavens and heavy downloaders: Video collectors need space, and you can only get to 4TB of space cheaply with hard drives.
• Budget buyers: Ditto. Plenty of space for cheap. SSDs are too expensive for $500 PC buyers.
• Graphics Arts: Video and photo editors wear out storage by overuse. Replacing a 1TB hard drive will be cheaper than replacing a 500GB SSD.
• General users: Unless you can justify a need for speed or ruggedness, most users won’t need expensive SSDs in their system.

SSDs
• Road Warriors: People who shove their laptops into their bags indiscriminately will want the extra security of an SSD. That laptop may not be fully asleep when you violently shut it to catch your next flight. This also includes folks who work in the field, like utility workers and university researchers.
• Speed Demons: If you need things done now, spend the extra bucks for quick bootups and app launches. Supplement with a storage SSD or HDD if you need extra space (see below).
• Graphics Arts and Engineering: Yes, I know I said they need HDDs, but the speed of an SSD may make the difference between completing two proposals and completing five for your client. These users are prime candidates for dual-drive systems (see below).
• Audio guys: If you’re recording music, you don’t want the scratchy sound from a hard drive intruding. Go for the quieter choice.

Now, we’re talking primarily about internal drives here, but the same considerations apply to external hard drives. External drives come in both large desktop form factors and compact portable form factors, and SSDs are becoming a larger part of the external market as well. The same sorts of affinities apply; road warriors, for example, will want an external SSD over an HDD if they’re rough on their equipment.

Hybrid Drives and Dual-Drive Systems
Back in the mid 2000s, some hard drive manufacturers, like Samsung and Seagate, theorized that if you add a few GB of flash chips to a spinning HDD, you’d get a so-called “hybrid” drive that approaches the performance of an SSD at only a slight price premium over an HDD. All of it fits in the same space as a “regular” HDD, and you get the HDD’s overall storage capacity. The flash memory acts as a buffer for oft-used files (like apps or boot files), so your system has the potential to boot and launch apps faster. The flash memory isn’t directly accessible by the end user, so you can’t, for example, install Windows or Linux on the flash chips. In practice, drives like the Seagate Momentus XT work, but they are still more expensive and more complex than simple hard drives. They work best for people like road warriors who need large storage but want fast boot times, too. Since they’re an in-between product, hybrid drives don’t necessarily replace dedicated HDDs or SSDs.

In a dual-drive system, the system manufacturer installs a small SSD primary drive (C:) for the operating system and apps, and adds a larger storage drive (D: or E:) for your files. While in theory this works well, in practice, manufacturers can go too small on the SSD. Windows itself takes up a lot of space on the primary drive, and some apps can’t be installed on the D: or E: drive. Some capacities, like 20GB or 32GB, may simply be too small. For example, the Polywell Poly i2303 i5-2467M comes with a 20GB SSD as the boot drive, and we were unable to complete testing, let alone install usable apps, since there was no room left over once Windows 7 was installed on the C: drive. In our opinion, 80GB is a practical minimum size for the C: drive, with 120GB being even better. Space concerns are the same as with any multi-drive system: You need physical space inside the PC chassis to hold two (or more) drives.

Western Digital Black2 Dual Drive

Last but not least, an SSD and an HDD can be combined (like Voltron) on systems with technologies like Intel’s Smart Response Technology (SRT). SRT uses the SSD invisibly to help the system boot faster and launch apps faster. As with a hybrid drive, the SSD is not directly accessible by the end user; rather, it acts as a cache for files the system needs often (you’ll only see one drive, not two). Smart Response Technology requires true SSDs, like those in 2.5-inch form factors, but those drives can be as small as 8GB to 20GB and still provide performance boosts. Since the operating system isn’t being installed to the SSD directly, you avoid the drive-space problems of the dual-drive configuration mentioned above. On the other hand, your PC will need physical space for two drives, a requirement that may exclude some small-form-factor desktops and laptops. You’ll also need the SSD and your system’s motherboard to support Intel SRT for this scenario to work. All in all, it’s an interesting workaround.
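Conceptually, the SSD in an SRT setup behaves like a small cache sitting in front of the big hard drive. Intel’s actual caching policy is proprietary, so the following Python toy is only a sketch of the general idea (the class and file names are ours, not Intel’s):

```python
from collections import OrderedDict

class SSDCache:
    """Toy model: a small SSD caching hot files in front of a large HDD.
    Intel SRT's real caching policy is proprietary; this is just an
    illustrative least-recently-used (LRU) sketch."""

    def __init__(self, capacity_files=4):
        self.capacity = capacity_files
        self.cache = OrderedDict()            # file name -> file data

    def read(self, name, hdd):
        if name in self.cache:                # cache hit: fast SSD read
            self.cache.move_to_end(name)
            return self.cache[name]
        data = hdd[name]                      # cache miss: slow HDD read
        self.cache[name] = data               # keep a copy on the SSD
        if len(self.cache) > self.capacity:   # evict the coldest file
            self.cache.popitem(last=False)
        return data

hdd = {f"file{i}": f"data for file{i}" for i in range(10)}
ssd = SSDCache()
ssd.read("file0", hdd)   # first read comes off the slow HDD
ssd.read("file0", hdd)   # repeat read is served from the SSD cache
```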

It’s unclear whether SSDs will totally replace traditional spinning hard drives, especially with shared cloud storage waiting in the wings. The price of SSDs is coming down, but still not enough to totally replace the terabytes of data that some users have in their PCs and Macs. Cloud storage isn’t free, either: You’ll continue to pay as long as you want personal storage on the Internet. Home NAS drives and cloud storage will alleviate some storage concerns, but local storage won’t go away until we have ubiquitous wireless Internet everywhere, including on planes and out in the wilderness. Of course, by that time, there may be something better. I can’t wait.

How to build your own 180TB RAID6 storage array for $9,305

Storage Pod 4.0, side by side

We’ve all been there: Your computer’s 2-terabyte drive has filled itself up again, and it’s time to delete some movies and uninstall some games. But wait! Instead of deleting data like some kind of chump, I have a better idea: Build your own 180-terabyte RAID6 storage array, and never run out of space ever again. With 180 terabytes of storage under the hood, never again will the Steam Summer Sale give you storage anxiety; never again will you have to decide which files get backed up. The best part? Building your own 180TB storage array will cost you just $9,305.

The 180TB storage array, like many of our other hard drive-related stories, comes from our friends at Backblaze. Backblaze is a cloud-based backup company that provides unlimited storage for a fixed monthly price — a service it can only provide because it builds its own Storage Pods, instead of using commercial devices that are well over twice the price. Backblaze originally open sourced the specifications of Storage Pod 2.0 in 2011 — and now, as the company continues to grow and seek out cheaper and higher density storage solutions, it has just published the details of Storage Pod 4.0.

First, the specifications. Storage Pod 4 consists of a custom-designed 4U server case containing 45 4TB hard drives, a single 850W power supply, and a motherboard/CPU/RAM that runs the controller software. The centerpiece of the installation, though, is a pair of Rocket 750 40-port SATA PCIe host adapter expansion boards, priced at around $700 each. These specs are a big step up from Storage Pod 2.0 and 3.0, which required two PSUs, and nine five-drive NAS backplanes that then connected to three SATA expansion cards. By wiring the hard drives directly into the host adapter, Backblaze says Storage Pod 4 has between four and five times the throughput of its predecessor.
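One caveat on that 180TB figure: it’s the raw total (45 drives x 4TB). How much space survives RAID6 parity depends on how the drives are grouped, and Backblaze’s controller software is proprietary, so the three-group layout in this Python sketch is purely an illustrative assumption:

```python
# Raw vs. RAID6-usable capacity for a 45-drive pod. The grouping into
# three 15-drive RAID6 volumes is an illustrative assumption, not
# Backblaze's published configuration.
DRIVE_TB = 4
DRIVES = 45
GROUPS = 3                # assumed: three 15-drive RAID6 groups
PARITY_PER_GROUP = 2      # RAID6 gives up two drives' capacity per group

raw_tb = DRIVES * DRIVE_TB
usable_tb = (DRIVES - GROUPS * PARITY_PER_GROUP) * DRIVE_TB
print(f"raw: {raw_tb}TB, usable after parity: {usable_tb}TB")
# raw: 180TB, usable after parity: 156TB
```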

Rocket 750 40-port SATA expansion cards, inside the Backblaze Storage Pod 4.0

If you want to build your own Storage Pod, Backblaze does provide a complete parts list and blueprint, but it would be a pretty epic endeavor. Instead, Backblaze suggests that you buy an empty Storinator chassis from 45 Drives, which is based on the Backblaze Storage Pod, and fill it up with your own drives. This method will cost you around $12,500, rather than Backblaze’s cheaper in-house cost of $9,305. In case you’re wondering, Backblaze is currently filling its Storage Pods with Hitachi (HGST) and Seagate 4TB hard drives, but it wants to try out Western Digital’s Red drives in the near future. (Read: Who makes the most reliable hard drives?)

The Thailand hard drive crisis, three years on

What’s odd about Storage Pod 4.0, however, is that its cost-per-gigabyte is almost identical to Storage Pod 2.0, released back in July 2011. Storage Pod 2.0 provided 135TB at a cost of $7,394, or 5.5 cents per gig; Storage Pod 4.0 is 180TB for $9,305, or 5.1 cents per gig.
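The arithmetic is easy to check in a couple of lines of Python (decimal terabytes assumed; the small gap from the quoted 5.1 cents comes down to rounding and gigabyte conventions):

```python
# Cost per gigabyte for the two Storage Pod generations (1TB = 1,000GB).
pods = {
    "Storage Pod 2.0 (July 2011)": (7394, 135),  # (total cost USD, capacity TB)
    "Storage Pod 4.0 (2014)":      (9305, 180),
}
for name, (cost, tb) in pods.items():
    print(f"{name}: {cost / (tb * 1000) * 100:.1f} cents per gig")
# Storage Pod 2.0 (July 2011): 5.5 cents per gig
# Storage Pod 4.0 (2014): 5.2 cents per gig
```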

Hard drive cost per gigabyte, from 2009 to 2013

If the Thailand flooding of 2011 hadn’t occurred, we’d probably be around 3 cents per gig. After the floods, hard drive prices shot up, and it took almost 30 months for hard drive prices to start trending below their July 2011 level. This is why, after almost three years, 4TB drives are still the most cost effective (before the Thailand floods, the cost-per-gig was almost halving every two years, in line with Moore’s law).

The good news, though, is that 5- and 6-terabyte drives are now on the market — they’re just incredibly expensive. The WD/HGST helium-filled 6TB drive is one of the most exciting hard drives to hit the market in the last decade — but priced at around $750, or 12 cents per gig, it just doesn’t make economic sense for large storage arrays.

For a complete parts list, chassis blueprint, and info on how to build your own Storage Pod 4.0, hit up the Backblaze website. It’s worth noting that Backblaze’s controller/RAID6 software is proprietary — so if you do go down the DIY route, you’d probably end up using something like FreeNAS, or rolling your own software. (Let’s face it, 180TB storage arrays aren’t really for home users; this is enterprise- and supercomputing-level stuff).

Facebook’s facial recognition technology is now as accurate as the human brain, but what now?


Facebook’s facial recognition research project, DeepFace (yes really), is now very nearly as accurate as the human brain. DeepFace can look at two photos, and irrespective of lighting or angle, can say with 97.25% accuracy whether the photos contain the same face. Humans can perform the same task with 97.53% accuracy. DeepFace is currently just a research project, but in the future it will likely be used to help with facial recognition on the Facebook website. It would also be irresponsible if we didn’t mention the true power of facial recognition, which Facebook is surely investigating: Tracking your face across the entirety of the web, and in real life, as you move from shop to shop, producing some very lucrative behavioral tracking data indeed.

The DeepFace software, developed by the Facebook AI research group in Menlo Park, California, is underpinned by an advanced deep learning neural network. A neural network, as you may already know, is a piece of software that simulates a (very basic) approximation of how real neurons work. Deep learning is one of many methods of performing machine learning; basically, it looks at a huge body of data (for example, human faces) and tries to develop a high-level abstraction (of a human face) by looking for recurring patterns (cheeks, eyebrows, etc.). In this case, DeepFace consists of a bank of neurons nine layers deep, plus a learning process that creates 120 million connections (synapses) between those neurons, based on a corpus of four million photos of faces. (Read more about Facebook’s efforts in deep learning.)

Once the learning process is complete, every image that’s fed into the system passes through the synapses in a different way, producing a unique fingerprint at the bottom of the nine layers of neurons. For example, one neuron might simply ask “does the face have a heavy brow?” — if yes, one synapse is followed, if no, another route is taken. This is a very simplistic description of DeepFace and deep learning neural networks, but hopefully you get the idea.
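To make the “fingerprint” idea concrete, here’s a toy verification sketch in Python. The 128-dimensional vectors, the cosine-similarity metric, and the 0.8 threshold are all illustrative assumptions on our part; DeepFace’s real descriptors come out of its trained nine-layer network, not random numbers:

```python
import numpy as np

def verify(desc_a, desc_b, threshold=0.8):
    """Declare a face match if two descriptors point in nearly the
    same direction (cosine similarity above a chosen threshold)."""
    a = desc_a / np.linalg.norm(desc_a)
    b = desc_b / np.linalg.norm(desc_b)
    similarity = float(np.dot(a, b))
    return similarity, similarity > threshold

rng = np.random.default_rng(0)
face = rng.normal(size=128)                  # stand-in for a network's output
same_face = face + rng.normal(0, 0.1, 128)   # same person, different photo
other_face = rng.normal(size=128)            # a different person

print(verify(face, same_face))    # high similarity -> match
print(verify(face, other_face))   # near-zero similarity -> no match
```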

Sylvester Stallone, going through DeepFace’s forward-facing algorithm. Notice how the slight tilt/angle in (a) is corrected in (g). (d) is the “average” forward-looking face that is used for the transformation. Ignore (h), it’s unrelated.

Anyway, the complexities of machine learning aside, the proof is very much in the eating: DeepFace, when comparing two different photos of the same person’s face, can verify a match with 97.25% accuracy. Humans, performing the same verification test on the same set of photos, scored slightly higher at 97.53%. DeepFace isn’t impacted by varied lighting between the two photos, and photos from odd angles are automatically transformed (using a 3D model of an “average” forward-looking face) so that all comparisons are done with a standardized, forward-looking photo. The research paper indicates that performance — one of the most important factors when discussing the usefulness of a machine learning/computer vision algorithm — is excellent, “closing the vast majority of [the] performance gap.”


Facebook tries to impress upon us that verification (matching two images of the same face) isn’t the same as recognition (looking at a new photo and connecting it to the name of an existing user)… but that’s a lie. DeepFace could clearly be used to trawl through every photo on the internet and link it back to your Facebook profile (assuming your profile contains photos of your face, anyway). Facebook.com already has a facial recognition algorithm in place that analyzes your uploaded photos and prompts you with tags if a match is made. I don’t know the accuracy of the current system, but in my experience it only really works with forward-facing photos, and it can produce a lot of false matches. Assuming the DeepFace team can continue to improve accuracy (and there’s no reason they won’t), Facebook may find itself in possession of some very powerful software indeed. [Research paper: “DeepFace: Closing the Gap to Human-Level Performance in Face Verification”]

What it chooses to do with that software, of course, remains a mystery. It will obviously eventually be used to shore up the existing facial recognition solution on Facebook.com, ensuring that every photo of you on the social network is connected to your account (even if they don’t show a visible tag). From there, it’s hard to imagine that Zuckerberg and co. will keep DeepFace purely confined to Facebook.com — there’s too much money to be earned by scanning the rest of the public web for matches. Another possibility would be branching out into real-world face tracking — there are obvious applications in security and CCTV, but also in commercial settings, where tracking someone’s real-world shopping habits could be very lucrative. As we’ve discussed before, Facebook (like Google) becomes exponentially more powerful and valuable (both to you and its shareholders) the more it knows about you.

Intel’s new Quark CPU core is one step towards a new, ARM-like foundry model


A little over five years ago, Intel took the wraps off a new low-power CPU core, codenamed Silverthorne. That chip, branded as Atom, became the centerpiece of Intel’s low-power initiative, even if there’ve been a few missteps along the way. Now, Intel is aiming for an even lower power market with a new family of products — except this time, the company is bending previously ironclad rules about how it builds and licenses its own hardware, becoming a bit like its mobile archnemesis, ARM.

What’s smaller than an Atom? Meet Quark.

Wrong Quark

Intel Quark

That’s better.

Technical details on the Intel Quark family are still limited. We know the chip is x86-compatible and built on a 32nm process. Intel claims the Quark SoC will be one-fifth the size of the Atom SoC and draw a tenth the power. Those are going to be challenging goals to hit, and they suggest Quark is a truly embedded part aimed at markets Intel hasn’t previously deigned to enter. This isn’t a competitor to the Cortex-A family — it aims to compete with ARM’s Cortex-M series.

What makes Quark unique at the design level is that it’s the first Intel chip that’s fully synthesizable and designed to integrate with third-party IP blocks. That means a customer can take Quark and hook it up to custom I/O, graphics, storage, or WiFi/3G radios of their own choosing. For now, Intel intends to retain manufacturing control over the entire process, but Quark is apparently intended as a core that other foundries could license and build in the long term.

A new foundry strategy

There’s been talk for years that Intel might become a TSMC or GlobalFoundries competitor by throwing open its doors to all and sundry. This has always been unlikely given that Intel’s ace in the hole has always been its own foundry technology — a technology that it typically reserves for its own products. What the company is doing with Quark is leveraging its own IP in a way that lets it offer customizable hardware to potential customers without giving up control of either its processor IP or its own manufacturing capability.

While Intel has said that other foundries would eventually be able to build Quark chips in theory, this should be understood as a hypothetical “nothing standing in the way,” rather than a long-term licensing intention. Unlike Atom, which involves a great deal of hand-tuning and customized silicon, Quark is designed to be simpler and easier to produce. That means there are fewer roadblocks between the chip and long-term production if another foundry should seek a license. Any license Intel granted would undoubtedly be customer-specific and linked to a particular version of the core — there’s no sign that Intel is going to license the wider x86 model anytime soon — and to be fair, there’s no sign anyone else wants an x86 license.

Remember the early MIDs of 2007/2008, such as this Gigabyte M528? Probably not.

Will Quark be adopted? That’s hard to say. It’s easy to forget that Intel did a lot of heavy lifting around Atom, MID (Mobile Internet Device), and netbooks. While MIDs never took off, they definitely got the ball rolling as far as commissioning more mobile products and tablet-like designs. The company didn’t just put Atom out there as a design and invite folks to build something — it took an active part in creating the products. There’s no ready-made market for Quark, though it’s possible that Intel will integrate Quark cores as coprocessors in future designs.

What is clear is that Intel is going after a huge existing market in low-cost synthesizable cores, one with entrenched competitors and customers that already enjoy a great deal of choice. MIPS and ARM are players in this space, alongside embedded manufacturers like Freescale.

Supercomputing Technology and Applications Program starts operation

iVEC is pleased to announce that the Supercomputing Technology and Applications Program (STAP) team is now operational. The STAP team will help ensure that WA researchers are able to effectively use the current iVEC supercomputing resources (Epic and Fornax) and are ready to use the resources at the Pawsey Centre (now under construction).

The initial STAP team members are Chris Bording, Dr Daniel Grimwood, Dr Christopher Harris, Dr Nicola Varini and Dr Rebecca Hartman-Baker. The STAP team is led by Dr George Beckett.

These specialists will work with research groups, helping them to integrate the iVEC systems into their computing workflows and to exploit the potential of Petascale supercomputing. The STAP team will also be working with the iVEC Education Program to develop training materials and courses, and with the iVEC eResearch Program to grow the use of supercomputing resources into new research areas. More information on all of these activities will follow over the next few months.

To contact the STAP team directly, email stap-enquiries@ivec.org.

As part of the preparations for this, the STAP team members will be added to current project groups on Epic and Fornax. This is to enable them to quickly resolve any issue that a user may have. This will be the default policy going forward. If you have any concerns about this, or any other questions or comments about the Program, please contact the team on the email address above.

10 Types of Computers


There are a lot of terms used to describe computers. Most of these words imply the size, expected use or capability of the computer. While the term computer can apply to virtually any device that has a microprocessor in it, most people think of a computer as a device that receives input from the user through a mouse or keyboard, processes it in some fashion and displays the result on a screen.

Do you know the different types of computers? Read on to get started with the first computer type.

PC

The personal computer (PC) defines a computer designed for general use by a single person. While a Mac is a PC, most people associate the term with systems that run the Windows operating system. PCs were first known as microcomputers because they were complete computers built on a smaller scale than the huge systems in use by most businesses.

Desktop

A PC that is not designed for portability is a desktop computer. The expectation with desktop systems is that you will set the computer up in a permanent location. Most desktops offer more power, storage and versatility for less cost than their portable brethren.

Laptop

Also called notebooks, laptops are portable computers that integrate the display, keyboard, a pointing device or trackball, processor, memory and hard drive, all in a battery-operated package slightly larger than an average hardcover book.

Netbook

Netbooks are ultra-portable computers that are even smaller than traditional laptops. The extreme cost-effectiveness of netbooks (roughly $300 to $500) means they’re cheaper than almost any brand-new laptop you’ll find at retail outlets. However, netbooks’ internal components are less powerful than those in regular laptops.

PDA

Personal Digital Assistants (PDAs) are tightly integrated computers that often use flash memory instead of a hard drive for storage. These computers usually do not have keyboards but rely on touchscreen technology for user input. PDAs are typically smaller than a paperback novel and very lightweight, with reasonable battery life. A slightly larger and heavier version of the PDA is the handheld computer.

Workstation

Another type of computer is a workstation. A workstation is simply a desktop computer that has a more powerful processor, additional memory and enhanced capabilities for performing a special group of tasks, such as 3D graphics or game development.

Server

A server is a computer that has been optimized to provide services to other computers over a network. Servers usually have powerful processors, lots of memory and large hard drives. The next type of computer can fill an entire room.

Mainframe

In the early days of computing, mainframes were huge computers that could fill an entire room or even a whole floor! As the size of computers has diminished while the power has increased, the term mainframe has fallen out of use in favor of enterprise server. You’ll still hear the term used, particularly in large companies to describe the huge machines processing millions of transactions every day.

Supercomputer

This type of computer usually costs hundreds of thousands or even millions of dollars. Although some supercomputers are single computer systems, most are composed of multiple high-performance computers working in parallel as a single system. The best-known supercomputers are built by Cray.

Wearable Computer

The latest trend in computing is wearable computers. Essentially, common computer applications (e-mail, database, multimedia, calendar/scheduler) are integrated into watches, cell phones, visors and even clothing. For more information, see these articles on computer clothing, smart watches and fabric PCs.