Thursday, 29 August 2013

PS4 and Xbox One user interfaces are all about simplicity and speed

We're only three months away from the launch of both the Xbox One and PS4, and we're starting to get a better sense of how the consoles will actually work. We've seen the controllers, the cameras, and the consoles themselves, but we've still only seen a few minutes' worth of footage of one of the most important aspects of a device: the user interface. Actual speed, ease of navigation, and the potential for annoying ads are still somewhat difficult to judge from these sparse details, but we can already see that Sony and Microsoft have learned from their previous mistakes.

Xbox One

Xbox One Home
The home screen on the Xbox One is extremely simple. On the left, you're presented with the games and apps you've used most recently. On the right, Microsoft has some "recommendations" for you. Of course, that's little more than a way to present you with advertisements. Even so, the layout is very clean and basic — very inviting for people easily scared away by UI complexity. The biggest question I have about this screen is how it will populate the recently played section on first boot. Will it be filled with empty slots, or will Microsoft take this opportunity to get some extra Netflix and Hulu Plus advertising in there?
Xbox One Pins
With a familiar Metro-like grid layout, here we have the "pins" menu on the Xbox One. Just like the Xbox 360, this will serve as a favorites list for the games and apps you have installed on your console. More interestingly, the section on the right of the screen is serving as a sort of hub of frequently used features. Checking achievements, switching to live TV, and Bing-branded search are all accessible through the pins menu. Hopefully, users will be able to replace the drab black background with something more lively like the 360's themes.
In this video, Microsoft's Yusuf Mehdi breaks down the interface in more detail. With voice, gesture, or the controller, you can quickly navigate the Xbox One dashboard very much like the Xbox 360's. The standard categories like apps, games, and music are all still available, but Redmond has added a new section called "trending" in an attempt to help surface high-quality content. From this menu, you can quickly see what media your friends are consuming. You're also saddled with an aggregation of the most popular content from all users combined, so your mileage may vary.
Without a doubt, the biggest improvement here is the speed at which everything is happening. Mehdi says a command, and the Xbox One almost instantly switches apps. The extremely generous 3GB of RAM dedicated to system resources really makes the interface seem properly responsive — standing in stark contrast to the sluggish Xbox 360 UI. Reportedly, the Xbox One is utilizing three separate operating systems — two of which are virtualized — to get everything working properly together. By dedicating a fixed amount of resources to specific tasks, the Xbox One should be capable of running apps and playing games at the same time with no penalty on performance. It remains to be seen if the Xbox team can actually pull it off on day one, but this demo is promising.

PlayStation 4

XMB
The Xross Media Bar (XMB) is gone, and has been replaced with the PlayStation Dynamic Menu. As you can see from this screenshot, Sony is still holding onto the same aesthetic, though. The row of icons is very similar to the PS3's interface, but selecting one doesn't conjure a column of options. Instead, you're quickly popped into a different menu with its own row of icons. Thankfully, both text and voice chat seem to be integrated prominently in the main menu, so communication should be very fast this generation. Frankly, that's an area in which Sony lagged woefully behind during the PS3's lifespan.
PS4 What's New
In the PS4's "What's New" menu, you can clearly see some similarity between the PS4 and Xbox One interfaces. Dynamically sized boxes filled with your friends' latest accomplishments could easily fit with Microsoft's current aesthetic. While the larger boxes are certainly nice for screenshots and videos, a smaller list-like format would probably serve trophies and other text-based notifications better. Surely, we'll see Sony tweak its layouts as more feedback comes in ahead of the platform launch on November 15th.
As you can tell from this Gamescom demo, Sony is hell-bent on keeping its UI just as snappy as Microsoft's. Jumping from menu to menu is extremely quick, but more impressive is how easy it is to jump into a live game. In this video, you can see Shuhei Yoshida go from watching a live multiplayer game of Killzone: Shadow Fall to joining that specific match in a matter of seconds. Keep in mind, this demo is staged in such a way as to present the PS4 in the most positive light. It would be a huge disappointment if this kind of speed isn't possible in the real world, so Sony has put itself on the hook to deliver this kind of impressive user experience. Let's keep our fingers crossed. After dealing with increasingly languid consoles for the past eight years, this clear focus on sheer speed brings a smile to my face.

The PS4's halo effect on the PS Vita is kicking into overdrive

PS4 and Vita
We're now less than three months away from the PS4 launch, and Sony's momentum in the gaming industry has only increased. Countless developers are lining up to support the PS4, the Xbox One continues to struggle with its identity, and the PlayStation Vita's value is seemingly getting better by the day. With all of this good news surrounding the Tokyo company, it's entirely possible that the positive reception of the PS4 will have a significant impact on sales of the PS Vita.
Since the PS4 was announced, the PS Vita's value proposition has increased substantially. While the Vita already has limited PS3 remote play, PS3 cross-buy, and a line-up of respectable indie titles, it's clearly the PS4 that is driving the noticeable increase in interest. With built-in remote play functionality for almost every PS4 game, cross-console party chat, and rumors of a sweet $500 bundle, the Vita is primed to start its second life as a second screen for the PS4.
With all of the goodwill Sony has accumulated with the PS4 this year, it seems like the Vita could benefit heavily from the so-called "halo effect." The most notable example of this psychological phenomenon is, undoubtedly, Apple's iPod. In the mid-aughts, the iPod exploded in popularity, and that led to an increase in sales for the company's other products. As Sony turns the tides in the home console market, perhaps that popularity will translate to the traction the Vita so desperately needs to remain a viable platform.
Over in Redmond, Microsoft has been doing its damnedest to sell the Surface RT and Windows Phone 8 as proper second screens for the Xbox 360 and Xbox One. Unfortunately, the SmartGlass initiative just isn't nearly as compelling as the Vita. Sure, you can do a handful of neat activities with SmartGlass, but it just doesn't have the same kind of "buy once, play anywhere" functionality that the Vita has on offer. To make matters worse, Sony will be offering a competing iOS and Android app that takes on SmartGlass' features head-to-head, so the uphill battle for Redmond is even steeper. Microsoft still has a lot of catching up to do after its very public identity crisis, and there doesn't seem to be any path forward for competing directly with the remote play features that Sony is trumpeting.
Now that the Vita has seen its first price drop to $199, this upcoming holiday season could serve as the handheld’s true make-or-break moment. Sony has put in the grueling behind-the-scenes work, and turned the Vita into something worth noticing over the last year. In the face of lackluster sales, Sony has seemingly done the impossible and made gamers actually care about the Vita. All we can do now is sit back and wait to see if that translates into increased sales.

2013 Honda Fit review: A good small car with amazing cup holders but modest tech

When it comes to low-cost subcompact cars, you don't get much choice in tech options beyond hoping the USB jack comes standard and Bluetooth is available. All you can really do is pick the car that's the most fun to drive or has a lot of room on the inside. The Honda Fit in its final year wins on both counts and is as good as it gets. For now.
The other subcompact of note in 2013 is the redesigned Nissan Versa Note hatchback. The Versa has a 360-degree camera system, tires that beep when you fill them to the right pressure, and slightly more room. The Fit comes with USB as standard in the entry model, it’s more fun to drive, and the cup holders are fantastic. Take your pick. Neither has enough technology for us to pick one as the best subcompact car you can buy. Check back in 2014 when the third-generation Fit arrives with the promise of an 86 mpg Honda Fit hybrid version.
2012 Honda Fit Sport

2013 Fit has the tech you need, nothing more

The 2013 Honda Fit comes with stability control and anti-lock brakes standard, along with six airbags, or one for every two feet of length. The Advanced Compatibility Engineering makes extensive use of high-strength steel to disperse crash forces. It’s as good as a 13-foot car can be in a crash.
The other Honda Fit tech is USB (one jack, standard) and navigation, Bluetooth, and satellite radio (all optional). That's it. The only other option is paint color. Your seating option is black fabric. Henry Ford would be proud ("any color you want, as long as it's black"). With all Fit models, you get a single 12-volt accessory jack, so bring a multi-way adapter. Need 120 volts? Bring an inverter, too.
This Honda fits (so to speak) nicely in tight urban parking spaces, at just 162 inches (4105 mm) long and 2,600 pounds (1180 kg). But unlike the shorter and more cramped Mini Cooper, the Fit has pretty good room in back and its “Magic Seats” let you fold the rear seat cushion and seat back forward to create a largish cargo space.

On the road

The Fit is fun to drive around town, energetic if not real quick to accelerate to highway speed or go up hills, and quite acceptable cruising on the highway at steady speeds. The tank is just 10.6 gallons and I found the Fit Sport’s EPA rating with the 117-hp engine and five-speed automatic of 27-33-30 mpg (city-highway-combined) right on target, meaning you’ll need to stop for fuel every 300 miles, tops. In a car this small, you might hope for closer to 40 mpg.
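The "300 miles, tops" figure checks out against the tank size and EPA ratings above. A quick back-of-the-envelope calculation (real-world range will be a bit lower, since you won't run the tank completely dry):

```python
# Sanity check on the 2013 Fit Sport's range, using the tank size
# and EPA ratings quoted above.
TANK_GALLONS = 10.6
EPA_MPG = {"city": 27, "highway": 33, "combined": 30}

for cycle, mpg in EPA_MPG.items():
    miles = TANK_GALLONS * mpg
    print(f"{cycle}: about {miles:.0f} miles per tank")
```

The combined rating works out to roughly 318 miles per tank, so stopping for fuel every 300 miles leaves only a small reserve.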
Compared to the Mercedes-Benz S-Class ($100,000), Audi A8 ($80,000), Hyundai Equus ($65,000) and even the Honda Civic ($25,000) I’ve driven recently, this was the noisiest car. No surprise, but then you can have five Fits for the price of one S-Class. The Fit was also the most fun to drive. If you get the Fit Sport and the five-speed automatic, you also get paddle shifters.
2013 Honda Fit Sport
The steering wheel is leather-covered (Fit Sport) and the steering wheel buttons are decently sized. The instrument panel is Spartan and the color scheme of red dials, blue lettering, and black background is hard to make out in the daytime. The air conditioning lacks a temperature setting (just a warmer-colder knob), so you wind up fiddling with the controls during your trip.

Navigation, USB, and cup holders

The navigation system works but it feels dated and the buttons are small. Voice input lets you enter addresses while you're moving. Navigation is an option on the Fit Sport only. The Versa has better navigation and it's apparently cheaper; the actual cost is masked because both are bundled into option packages.
The USB jack, in a 2013 Honda Fit Sport
The USB jack is built into the upper glovebox, on a short tether cord, and it will play most music devices or USB thumb drives; it does not have enough power to charge an iPad that's running. Automakers are slowly upping the output from 1 to 2 amps to work better with tablets.
In addition to cup holders in the center console and door pockets, there are cup holders high up on the instrument panel right by the doors. This is the best location I’ve ever seen for cup holders, and in a tiny car with finite space, no less.
2013 Honda Fit Sport

When tech gets cheaper, even subcompacts will benefit

The next generation of subcompacts will need to incorporate some of the driver assistance features offered on many midsize and larger cars, particularly blind spot detection and lane departure warning. It’s easy to say that you can just turn your head in a car this small and with this much window glass. But people don’t do that, especially older people whose neck muscles aren’t so flexible any more. Over-50 buyers are the hidden demographic. Automakers talk about urban millennials as the hot demographic for small cars. The parents of that demographic, with no kids at home, are in the Fit’s buying set, too, especially in a two-car household.
You can put BSD, LDW and FCW (forward collision warning) on a car for $500 now, and if you use the EyeSight optical system I saw on the Subaru Forester, throw in adaptive cruise control and pedestrian safety for not much more.
To be competitive and remain atop the subcompact space, it would help for the third-generation 2014 Honda Fit to incorporate: improved NVH (noise, vibration, harshness), better gasoline-engine fuel economy (40 mpg, not 30), more than five speeds in the non-hybrid automatic transmission, a more modern navigation system with a backup camera, and Bluetooth that doesn't require you to buy navigation. The next Fit is likely to get a city safety/pedestrian safety feature that will stop the car short, automatically, in urban situations, and it's possible the same electronics would provide forward collision warning (not stopping) at higher speeds, as well as lane departure warning.

Should you buy? This year?

Among subcompacts, the Honda Fit is the most enjoyable car to drive outside the Mini Cooper, which costs more and has no back seat to speak of, so it's not really a competitor. The base Fit is about $16,000 with a manual gearbox, USB, and four-speaker audio. Add $1,500 for the more desirable Fit Sport and six-speaker audio. The Fit Sport with automatic is just under $19,000, and $1,500 more with navigation and Bluetooth, which is pricey for what you get.
In the US, there's also an EV version of the current Honda Fit, rated at 118 mpg-e and offered as a three-year, $260-per-month lease including a 240-volt charger. Honda says the range on a full charge is 80-130 miles, about typical for EVs other than a Tesla. There is a Fit Hybrid offered elsewhere but not in the US; the 2014 Honda Fit Hybrid is making news because of its reported 86 mpg fuel economy.
The 2014 Honda Fit
My suggestion if you’re buying: Start with the Honda Fit. Look at the Versa if you just want to put the car in drive and go somewhere, or if the navigation system is important. Also look at the other subcompacts that are all decent but with less cockpit/luggage room, such as the Ford Fiesta, Hyundai Accent, Kia Rio, and Mazda 2. The subcompact Chevrolet Spark has an innovative phone-linked navigation system, GoGo Link, but the car itself isn’t a prime contender.
You also should look at compact cars, since they're not much more costly. On the other hand, the Versa and Fit are both competitive in seating capacity and cargo space inside, even if they're subcompacts based on length. The Volkswagen Golf is the closest compact-car competitor based on its hatchback shape.
The 2013 Honda Fit is a lot like the 2009-2012 Honda Fit, so think used as well as new. Make sure any used Fit you’re checking out has stability control (it’s standard now but wasn’t on the base Fit before 2011). Stability control is the car’s most important safety feature after seat belts.
At ExtremeTech, we've been designating the best tech cars in their categories. For subcompacts, neither the Honda Fit nor the Nissan Versa carries enough technology for that designation. Leave it at this: The Versa and Fit have the best combination of space and affordability in a subcompact car. The Fit adds a measure of fun-to-drive and holds the promise of more tech just down the road.

EVs are better and cheaper, so why aren’t they selling? (Actually, they are)

Surprise: Sales of electric cars are up, despite what you may have heard. At the same time, the majority of EV makers are having trouble keeping sales up. The reason is simple: Two brands, Nissan and Tesla, make up most EV sales. Everyone else registered sales in the hundreds of units for the first half of 2013. Depending on how you crunch the numbers, the market for EVs can be considered healthy or anemic. Here are five reasons why the market for EVs is getting better. Or not.

1. Sales are way up in the US

Sales more than tripled in the US in the first half compared to the first half of 2012, led by the Nissan Leaf and Tesla Model S. Renault has had similar success selling EVs in Europe. In the US, sales jumped from about 7,000 to almost 25,000.
But… that's still a drop in the bucket compared to the eight million US car sales through June. As for EVs, only Tesla and Nissan are doing well. Of the 25,000 EV sales, the Nissan Leaf accounted for almost 12,000. Tesla doesn't report sales until the SEC forces it to, but based on Tesla's repeated claim that it will sell 20,000 Model S cars this year, that's up to 10,000 in the first half. Then it drops way off: The Ford Focus EV sold just over 1,000 units, the Mitsubishi MiEV just under. Who's left? The Toyota RAV4 EV, Honda Fit EV, Smart Fortwo ED, and Chevrolet Spark EV combined for just over 1,000 sales.
Nissan Leaf battery pack

2. EV prices are coming down

Nissan cut the price of the Leaf by $6,400, or 18%. Ford reduced the price of the Focus EV by $2,000 for cash buyers and reduced the effective cost by more than $10,000 on three-year leases. Mitsubishi offered a $10,000 rebate on the MiEV. That makes electric vehicles price-competitive with comparable combustion-engine cars and hybrids.
But… some of the price cuts were driven by more efficient technology and the reduced cost of battery packs. Much was marketing-driven, meaning the price cuts were necessary to keep cars selling. Some cuts are temporary. Some are to move out stock that just won't sell. Sudden price cuts of several thousand dollars hurt the brand among customers who just bought and feel like chumps, and the resale value also takes a hit.
Tesla Supercharger station

3. Virtually all driving fits within the 80-100 mile range of EVs

Most people drive 20-40 miles a day. Word-of-mouth helps generate sales. There are more public charging stations than ever. ZipCar, RelayRides, and EV dealers all have programs to get you a real car — sorry, combustion-engine car — for the occasional long weekend trip. So does the existing infrastructure called Avis, Hertz, and National.
But… every time automakers tell you not to worry about range anxiety, you worry about range anxiety. It’s real when you forgot to charge the car last night and try to limp in to work. Batteries degrade over time, air conditioning and seat heaters affect range, and sometimes the public charging stations are full up or broken. The happiest EV owners are the greens and those who own two cars, one of which isn’t an EV. It’s easy to be happy with your Nissan Leaf when you also have a Nissan Armada at the ready.

4. Electric is cheaper than gas and diesel

The big electric company that rips you off every month, or so it seems, produces energy about three times as cheaply as a combustion engine. At the national average of 11-12 cents per kilowatt-hour (1 kWh equals 1,000 watts used for one hour), for the average EV that's the same as finding gasoline for a little over a dollar a gallon. Every automaker supplies a 120-volt (overnight) charging cable, and the price of 240-volt chargers that refill the batteries in three hours is coming down.
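Here's a rough sketch of that gasoline-equivalence math. The 3.5 miles-per-kWh EV efficiency and the 30 mpg comparison car below are illustrative assumptions, not figures from this article:

```python
# Gasoline-equivalence sketch: what gas price matches an EV's per-mile
# energy cost? Efficiency figures here are assumptions for illustration.
ELECTRICITY_PRICE = 0.115   # $/kWh, the national average cited above
EV_EFFICIENCY = 3.5         # miles per kWh, typical for a Leaf-class EV (assumed)
COMPARISON_MPG = 30.0       # a reasonably efficient gasoline car (assumed)

cost_per_mile_ev = ELECTRICITY_PRICE / EV_EFFICIENCY      # $/mile on electricity
equivalent_gas_price = cost_per_mile_ev * COMPARISON_MPG  # $/gallon break-even

print(f"EV cost per mile:     ${cost_per_mile_ev:.3f}")
print(f"Equivalent gas price: ${equivalent_gas_price:.2f}/gallon")
```

With these assumptions the break-even lands right around a dollar a gallon, in line with the figure above.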
But… it’s only cheaper on a per-mile basis if you don’t factor in the premium you pay for the EV. It’s cheaper only if you use electricity at home. Some workplaces and public charging stations, the ones that got a lot of publicity, do it for free (it helps if you work at Google and other public-minded companies, but you probably don’t). Other parking lots tack on a convenience or flat rate charging fee that wipes out the advantages of cheap electricity. For buyers who have only 120-volt charging, it’s still overnight, and for everyone, it’s a hassle to plug the charging cable in every time you pull into your garage at night. We are still years away from inductive charging where a transformer plate in the garage floor works with a transformer plate on the underside of your EV so long as they’re in reasonable proximity to each other.
Tesla Model S

5. More and better EVs are coming soon here and abroad

Tesla’s rumored Model E will likely be priced closer to the mainstream, perhaps around $30,000, in 2015. The BMW i3 is the first mass-production vehicle making extensive use of carbon fiber to reduce weight. The Mercedes-Benz SLS AMG Electric Drive will generate 580 hp and reach 60 mph in less than 4 seconds. These halo cars will generate sales and publicity for EVs. Around the world, there’s a huge market for megacity vehicles where the owner seldom would leave town: Beijing, Mexico City, Tokyo.
But… no buts (almost!) Cars like the BMW i3 will have a premium compared to non-BMWs, but not compared to a BMW combustion engine car with the same carrying capacity. To hedge its bets and in response to customer demand, BMW will also offer the i3 with a gasoline helper engine, making the i3 more like the Chevrolet Volt yet capable of going up to 200 miles. Expectations are that US customers will opt for the PHEV version.
Sales of EVs hit 100,000 earlier this year, something of a milestone. EVs are getting better and cheaper, short-term because of discounting, longer term because tech drives the cost of everything down. Near-term challenges remain. The 100,000 includes pioneers and green chauvinists who already have their EVs and won’t be in the market again for a couple years. Most of the sales included federal incentives of $7,500 that could go away. In California, EVs are popular because they carry the right to drive in the HOV lane and that, too, could go away when EVs clog the HOV lanes just as hybrids did a decade ago.

Wednesday, 28 August 2013

First human brain-to-brain interface allows remote control over the internet, telepathy coming soon

The first human-to-human, brain-to-brain noninvasive interface has been created by researchers at the University of Washington. The system allows one researcher to remotely control the hand of another researcher, across the internet, merely by thinking about moving his hand. The researchers are already looking at a two-way system, to allow for a more "equitable" telepathic link between the two human brains, and the telepathic communication of complex information.
Despite the massive and mostly-not-understood complexity of the human brain, the UW brain-to-brain interface is actually quite simple, relying on tools that are regularly used in the fields of medicine and brain-computer interfaces (BCIs). The first human brain (the sender) is connected to a computer via an EEG-based BCI. The second human brain (the receiver) is connected to another computer via a Magstim transcranial magnetic stimulation (TMS) machine — the same kind of TMS setup that has been somewhat successful in treating depression, and other mental maladies. When the sender plays a game and thinks about firing a cannon at a target, the EEG picks it up, sends the signal across the internet to the second computer, and the TMS stimulates the region of the receiver’s motor cortex that controls hand movement. This causes the receiver’s index finger to twitch, firing the cannon and blowing up the target. This process is almost instantaneous.
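The pipeline above is simple enough to caricature: classify a brain signal, send a trigger over the network, stimulate on the other end. The sketch below is purely illustrative — nothing in it is the UW team's actual software; the threshold is invented, and both the EEG (sender) and TMS (receiver) sides are simulated in one process:

```python
# Illustrative classify-then-trigger sketch of the brain-to-brain pipeline.
# Everything here is simulated; real EEG/TMS hardware APIs are vendor-specific.
import socket
import threading

fired = []  # records that the simulated "TMS pulse" was triggered

def tms_receiver(server: socket.socket) -> None:
    """Receiver side: wait for a trigger byte, then 'fire' the coil."""
    conn, _ = server.accept()
    if conn.recv(1) == b"\x01":
        fired.append("TMS pulse -> motor cortex -> finger twitch")
    conn.close()

# Receiver computer, stood in for by a local port.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
thread = threading.Thread(target=tms_receiver, args=(server,))
thread.start()

# Sender side: the EEG classifier reports its confidence that the sender is
# imagining a hand movement; above a threshold, send the trigger byte.
imagery_confidence = 0.92   # simulated classifier output
TRIGGER_THRESHOLD = 0.8     # invented cutoff, for illustration only
if imagery_confidence >= TRIGGER_THRESHOLD:
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(b"\x01")

thread.join()
server.close()
print(fired)
```

The point of the pattern is how little information crosses the wire: one bit ("fire" or not), which is why the system works despite the brain's complexity.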
TMS is a lot like transcranial direct current stimulation (tDCS), which we have written about extensively. Where tDCS passes an electrical current through your brain, affecting the neurons that the electrons travel through, TMS uses electromagnetic induction to create a similar effect. Both tDCS and TMS can be used to either stimulate regions of the brain, useful for brain-to-brain interfaces or increasing the activity of regions of the brain associated with depression, or to reduce the activity of a region, which might help with the treatment of other conditions, such as Parkinson’s. Like tDCS, TMS is completely noninvasive, and so far it appears to be completely safe.
The University of Washington (UW) researchers, led by Rajesh Rao and Andrea Stocco, have basically connected two quite simple and well-understood systems into a novel and slightly terrifying human-to-human interface. It is very similar to Harvard's human-to-mouse interface, except Harvard used focused ultrasound (FUS) instead of TMS to trigger the motor cortex. That the UW setup works isn't all that surprising — the main thing is that, for the first time, a human is on the receiving end, which raises some interesting ethical and moral issues.
Brain-to-brain diagram, including the EEG, computers, network, and TMS setup
Chantel Prat, another researcher involved with the work, is quick to try and dispel any concerns. “I think some people will be unnerved by this because they will overestimate the technology,” Prat says. “There’s no possible way the technology that we have could be used on a person unknowingly or without their willing participation.” This is an overly simplistic way of looking at it, though. Yes, the current setup requires both users to be fully consenting — but in the future, it’s not hard to imagine wireless implants that allow for full telepathy and perhaps a limited range of remotely triggered actions. (See: Brown University creates first wireless, implanted brain-computer interface.) As always with technology, we don’t need to worry so much about the hardware itself — but rather how it might be subverted, once a significant number of people have brain-to-brain interfaces installed.
Moving forward, Rao and Stocco are now working on transmitting more complex information between two human brains. This could be done fairly simply with encoded pulses — think brain-to-brain Morse code — or they could go the complex route and try to stimulate the brain into creating actual images and thoughts. There’s still a lot of work to be done to decode the human brain, so it will be very interesting to see how future human-to-human brain-to-brain interfaces are implemented.

Curiosity turns on self-driving software, can now navigate Mars on its own

Roughly 200 million miles away on the surface of the Red Planet, NASA's Curiosity just one-upped Google: It autonomously drove across the surface of Mars without human supervision. This is the first time that Curiosity has turned on its self-driving "autonav" software, and initial reports suggest that Curiosity navigated the treacherous surface of Mars flawlessly. The autonav software will help Curiosity reach its destination, Mount Sharp, much more quickly because the rover won't have to wait for driving instructions from NASA — it can just plug away at the remaining kilometers autonomously.
Up until yesterday, every movement made by Curiosity had been painstakingly keyed in by NASA, usually after performing simulations here on Earth using Curiosity's stunt double (the Vehicle System Test Bed). These movements are planned by NASA engineers, who pore through photos of the terrain captured by Curiosity to seek out potential obstacles, such as big rocks or sand traps (Mars rover Opportunity famously stumbled into a sand dune in 2005, and took 40 days to extricate itself). The problem is, the cameras on board Curiosity can only see so far ahead; if there's a dip in the ground, or the rover is going uphill, NASA can only plan a very short drive until it gets updated imagery of the rover's surroundings.
The view from Curiosity's front-left hazcam, with Mount Sharp in the distance
This is where the autonomous navigation software, or autonav for short, kicks in. Basically, Curiosity is equipped with two stereo pairs of hazard avoidance cameras (hazcams) which create a 3D map of the rover’s surroundings. This is similar to the EyeSight system implemented by the Subaru Forester, but a lot simpler than the LIDAR system employed by Google’s self-driving cars (Mars doesn’t have any fast-moving obstacles, making autonomous driving a lot easier.) Using this 3D map, the rover can plot an alternate course around any obstacles that aren’t safe to drive over. Yesterday, August 27, Curiosity’s 376th Martian day, the rover autonomously drove itself through a 10-meter depression which NASA could not confirm ahead of time to be safe.
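The "plot an alternate course" step is classic grid path planning. The rover's real planner is far more sophisticated, but the core idea — mark hazardous cells in the 3D map as untraversable and search for a route around them — can be illustrated with a toy breadth-first search (the grid and hazard layout below are invented):

```python
# Toy obstacle-avoidance planner: breadth-first search over a traversability
# grid, the same basic idea as routing around cells the hazcams flag unsafe.
from collections import deque

def plan(grid, start, goal):
    """grid[r][c] == 1 means hazardous; returns a list of cells or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(path + [(nr, nc)])
    return None  # goal unreachable

terrain = [
    [0, 0, 0],
    [1, 1, 0],   # a ridge of hazardous cells to route around
    [0, 0, 0],
]
route = plan(terrain, (0, 0), (2, 0))
print(route)
```

The planner detours through the safe column on the right rather than driving straight across the hazardous ridge — exactly the behavior autonav exhibited in the 10-meter depression.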
A poster illustrating Opportunity's autonav software, which is very similar to Curiosity's
A map of Curiosity’s progress from landing, to Glenelg, to its eventual target: Mount Sharp
Since leaving Glenelg, where Curiosity confirmed that conditions on Mars could’ve once supported life, the rover has driven a grand total of 0.86 miles (1.39 km) towards its primary science target, Mount Sharp. It has around 4.46 miles (7.18 km) left to go, and will stop at a number of scientifically interesting waypoints identified by the Mars Reconnaissance Orbiter’s HiRISE camera. It will take months to reach Mount Sharp, but we should have a lot of pretty photos and interesting science to share during that period.

Honda Fit: 86 mpg from the next hyper-efficient hybrid

Could your next car get 86 mpg? It might if it's a Honda. The next-generation Honda Fit subcompact will be unveiled this fall and arrive in the US in the first half of 2014. Most of the buzz over the new Fit (called the Honda Jazz in some countries) is about the hybrid version, which promises a 35% improvement in fuel economy. The US currently gets the gasoline Honda Fit and the Fit EV — not the hybrid Fit — but that could change with the next model.
On a Japanese test cycle, 2014 Fit Hybrid fuel economy will be on the order of 2.7 liters consumed per 100 km, or 85.6 US mpg. That's a mathematical conversion that doesn't account for the US test cycle. But still, it could be the most efficient hybrid if and when it arrives stateside. Currently the most efficient non-EV cars sold in the US are the Toyota Prius C and Toyota Prius, each with a 50 mpg combined EPA rating and city ratings of 53 mpg and 51 mpg, respectively. The 2013 Honda Fit gets 29-31 mpg combined depending on the transmission, or 33-35 mpg highway; the Honda Fit EV gets 118 mpg-e (miles per gallon equivalent), best in the category the EPA calls small station wagons.
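The conversion behind those figures is straightforward arithmetic. A small helper shows it (note that 2.7 L/100 km actually works out to about 87 mpg; the quoted 85.6 mpg corresponds to roughly 2.75 L/100 km, so the 2.7 is rounded):

```python
# Liters-per-100-km to US mpg. The constants are exact unit definitions
# (1 mile = 1.609344 km, 1 US gallon = 3.785411784 liters).
KM_PER_MILE = 1.609344
LITERS_PER_GALLON = 3.785411784

def l_per_100km_to_mpg(l_per_100km: float) -> float:
    """Convert fuel consumption in L/100 km to US miles per gallon."""
    km_per_liter = 100.0 / l_per_100km
    return km_per_liter / KM_PER_MILE * LITERS_PER_GALLON

print(round(l_per_100km_to_mpg(2.7), 1))   # 87.1
print(round(l_per_100km_to_mpg(2.75), 1))  # 85.5
```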

Atkinson engine, 7-speed double clutch transmission, electric motor

Under the umbrella of Honda's Earth Dreams Technology program, the new Fit Hybrid will be the first to employ Honda's Intelligent Dual Clutch Drive system, or i-DCD. The gasoline power comes from a 1.5-liter, four-cylinder Atkinson-cycle engine. A single electric motor is packaged with a seven-speed dual clutch transmission, linked through an intelligent power unit (IPU) to a lithium-ion storage battery. An Atkinson engine has an effectively shorter compression stroke (when the piston moves upward) than the downward power stroke, accomplished by not immediately closing each cylinder's intake valve. The longer expansion stroke captures more of the energy in the fuel-air mixture.
When starting out, the clutches disengage the gas engine and the Fit Hybrid starts off on battery power. The clutches engage the engine and gearbox under sporty (hard) acceleration and at higher speeds. When the Fit Hybrid decelerates, the gas engine is again disengaged.
Honda i-DCD diagram
The 35% improvement Honda claims is relative to the current Fit Hybrid’s integrated motor assist (IMA) configuration. The IMA electric motor only runs when the gasoline engine runs, functioning much like a turbocharger; this is called a mild hybrid or weak hybrid configuration. A hybrid such as the Prius that can run on battery power alone is a strong hybrid, and who wouldn’t prefer strong over mild, let alone weak?
IMA may be cost-effective tech, but lots of hybrid owners want to see, and show their friends, a car that runs on battery alone for a mile or two. This is what happens when you let engineers have input into how a company runs: they pick solutions that make cost-effective sense. Sometimes the market agrees, other times not. Honda has long argued that nobody needs an engine of more than six cylinders, and it turns out the company was right, but the stance has hurt the Acura brand competing against V8 offerings from Lexus, Audi, BMW, and Mercedes-Benz over the last two decades. Honda has also been one or two gears shy of the competition in transmissions, arguing that four or five forward gears were fine: “look at our mpg figures, not the number of gears.” Now Honda, too, is creeping upwards to as many as seven (the industry record is currently nine). The IMA hybrid is giving way to the i-DCD hybrid. At the very least, IMA served its purpose, and Honda is moving on to a more efficient technology.

Three hybrid flavors from Honda

Honda will actually have three hybrid configurations going forward with one, two, and three electric motors for small, medium, and large/sporty cars. All will use lithium-ion batteries. The previous IMA hybrids used nickel-metal hydride.
The one-motor Intelligent Dual Clutch Drive design is what’s on the new Fit Hybrid and likely other small Hondas. The dual-clutch system is an automated mechanical transmission, meaning there’s a clutch (actually, two) as on manual gearboxes, but it’s automated: the car, rather than the driver’s left foot, actuates the clutches. Many European cars use dual-clutch transmissions, and they’ve all but killed off the manual gearbox among performance cars.
Honda Fit
The two-motor Intelligent Multi Mode Drive promises the highest efficiency, according to Honda researchers. It’s suited for plug-in hybrids, the ones that can go 20-40 miles on batteries before switching over to the combustion engine. It will be on the 2014 Honda Accord plug-in hybrid electric vehicle (PHEV). If there’s enough battery power, this drivetrain offers three modes. EV Mode is battery-only; it regains some of the expended energy braking and going down hills. Engine Drive is for medium and high-speed driving; the hybrid stuff is along for the ride here, with the gasoline engine directly connected by a lock-up clutch to the drive wheels. Hybrid Drive is a combination for city driving and also for extra power accelerating on the highway, where the electric motor adds boost just as a turbocharger does.
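The mode logic Honda describes can be sketched as a simple state selection. Everything below (the thresholds, the decision order, the function itself) is invented for illustration; the real control strategy is far more involved:

```python
# Rough sketch of the three-mode behavior described for Honda's two-motor
# i-MMD system. Thresholds and decision order are hypothetical.
def select_mode(speed_kmh: float, demand: float, battery_soc: float) -> str:
    """demand: 0.0 (coasting) .. 1.0 (full throttle)."""
    if battery_soc > 0.2 and demand < 0.3 and speed_kmh < 70:
        return "EV Drive"        # battery-only city driving
    if speed_kmh >= 70 and demand < 0.6:
        return "Engine Drive"    # lock-up clutch, engine direct to wheels
    return "Hybrid Drive"        # engine plus electric boost

print(select_mode(40, 0.1, 0.8))    # city cruising -> EV Drive
print(select_mode(110, 0.3, 0.5))   # highway cruise -> Engine Drive
print(select_mode(110, 0.9, 0.5))   # hard pass -> Hybrid Drive
```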
The three-motor Sport Hybrid SH-AWD (Super Handling All-Wheel Drive) system is for performance Hondas and Acuras. It uses a V6 gas engine, has the performance of a V8, and the fuel economy of a four-cylinder, Honda says. A 3.5-liter V6 engine is up front, along with the seven-speed DCT and an electric motor to drive the front wheels. Two more electric motors in back provide power for the left and right rear wheels and handle torque distribution. There is no driveshaft sending mechanical power to the rear wheels, which saves space and weight. Acura’s existing SH-AWD gas-engine cars have the ability to overdrive the outside wheels going around corners or on slippery patches. Depending on what kind of driver you are, SH-AWD provides an added measure of safety or performance.

Cellphone use doesn’t increase the number of car accidents, says new Carnegie Mellon study

Police car driver distraction cellular
Cops may still write you a ticket for yakking on a handheld phone while driving. But the link between cellphone use and accidents looks more tenuous if you agree with the conclusions of a recent study from Carnegie Mellon University and the London School of Economics and Political Science. Using statistics and data comparisons, the researchers found that the increased use of cellphones has led to no measurable increase in accidents. Expect this study to be hotly contested. It flies in the face of conventional wisdom and the prevailing winds in Washington that would like to see more restrictions on phone use and texting, up to and including interlocks that keep anyone in the car from using their phone.
The study was published in the American Economic Journal: Economic Policy. Saurabh Bhargava, assistant professor of social and decision sciences in CMU’s Dietrich College of Humanities and Social Sciences, and Vikram S. Pathania of the London School of Economics and Political Science looked at cellular data from 2002 to 2005, apparently after the NSA was done parsing it. They identified drivers as those whose calls were regularly handed off from cell tower to cell tower. At the time, most carriers offered free calling after 9 pm, and Bhargava and Pathania found that motorists increased their calling by 7% at 9 pm. They then pulled up data on eight million car crashes in those four years, as well as all fatal crashes. Their finding: no statistical link. Crashes didn’t go up when calling went up.
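The logic of the natural experiment is easy to simulate. The sketch below fabricates trip data under the null hypothesis (calling has no effect on crash risk) and shows that the before/after-9 pm crash rates then differ only by sampling noise, which is the pattern the authors report finding in the real data:

```python
# Toy version of the 9 pm natural experiment on synthetic data. Call
# volume jumps ~7% at 9 pm in the real study; under the null hypothesis
# that calling doesn't affect crash risk, the crash rate per trip is the
# same on both sides of the cutoff. All numbers are fabricated.
import random

random.seed(0)
crash_prob_per_trip = 0.001

def simulate(trips: int) -> float:
    """Return the observed crash rate over `trips` simulated trips."""
    crashes = sum(random.random() < crash_prob_per_trip for _ in range(trips))
    return crashes / trips

rate_before = simulate(100_000)
rate_after = simulate(100_000)
print(f"crash rate before 9pm: {rate_before:.4%}")
print(f"crash rate after 9pm:  {rate_after:.4%}")
# Despite the jump in calling, the two rates differ only by noise.
```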
Crashes vs. cellphone ownership
As a bonus, they compared states that enacted handheld cellphone bans and found no difference in the crash rate before and after. There’s a damning graphic in their study that compares cellphone ownership over 20 years vs. car crashes. Ownership is steadily up, crashes and fatal crashes are steadily down.

Are they on to something, or just blowing smoke?

It’s easy to challenge the study, or at least to nibble around the edges. The mobile user data would include passengers as well as drivers. Cars are getting safer all the time, and drunks are being policed off the roads more than in the past. Passing a hands-free law isn’t the same as getting people to obey it. Texting may be more distracting than making a call, as the authors themselves note. And so forth. But still, the research should be kept in mind when the Department of Transportation and the states ponder their next steps.
“Using a cellphone while driving may be distracting, but it does not lead to higher crash risk in the setting we examined,” says Bhargava. “While our findings may strike many as counterintuitive, our results are precise enough to statistically call into question the effects typically found in the academic literature. Our study differs from most prior work in that it leverages a naturally occurring experiment in a real-world context.”
Statistical modeling of big data can be a powerful tool to prove something that would be difficult to demonstrate otherwise. One example: are blacks and Hispanics overrepresented on death row because they are somehow inherently more criminal? Work with big data sets covering arrest, trial, conviction, and sentencing for defendants from similar socioeconomic backgrounds who committed similar crimes, and many researchers say the one variable that best explains the discrepancy is the defendant’s color. Conservatives and traditionalists may not like statistics and deep research being used to prove a point, but Nate Silver didn’t pick all 50 states right in the 2012 presidential election by flipping a quarter.
Cellphone-in-car opponents mostly rely on a 1997 article in the New England Journal of Medicine. It said cellphone use by drivers increased the risk of a crash fourfold, making it a safety hazard on par with driving drunk.

Tuesday, 27 August 2013

Xbox One graphics capabilities, odd SoC architecture, and bus bandwidth confirmed by Microsoft

Xbox One E3 press conference
Microsoft has finally lifted the curtain on the Xbox One, with a great deal of technical detail on display at the Hot Chips conference. For the first time, we’ve got a view into how the architecture is laid out and what its capabilities are. The chip is built on TSMC’s 28nm HPM process, which is designed to offer the simultaneous benefits of high performance and low leakage power, and measures a sizeable (though not enormous) 363 mm². It’s capable of running at as little as 2.5% of active power thanks to aggressive power gating, so leaving the system running won’t destroy your power bill.
What follows is a quick dive into what we now know.

Inside the SoC

The Xbox One SoC appears to be implemented like an enormous variant of the Llano/Piledriver architecture we described for the PS4. One of our theories was that the chip would use the same “Onion” and “Garlic” buses. That appears to be exactly what Microsoft did.
AMD APU diagram
That slide is from Llano/Piledriver. Microsoft’s slide from the Hot Chips presentation is here:
Xbox One SoC
This image courtesy of SemiAccurate. Better images hopefully coming soon.
Here are the important points, for comparison’s sake. The CPU cache block attaches to the GPU MMU, which drives the entire graphics core and video engine. Of particular interest for our purposes is this bit: “CPU, GPU, special processors, and I/O share memory via host-guest MMUs and synchronized page tables.” If Microsoft is using synchronized page tables, this strongly suggests that the Xbox One supports HSA/hUMA and that we were mistaken in our assertion to the contrary. Mea culpa.
You can see the Onion and Garlic buses represented in both AMD’s diagram and the Microsoft image above. The GPU has a non-cache-coherent bus connection to the DDR3 memory pool and a cache-coherent bus attached to the CPU. Bandwidth to main memory is 68GB/s using 4×64 DDR3 links or 36GB/s if passed through the cache coherent interface. Cache coherency is always slower than non-coherent access, so the discrepancy makes sense.
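The 68GB/s figure falls straight out of the bus math, assuming the DDR3 runs at 2133 MT/s (the transfer rate implied by the quoted bandwidth):

```python
# Main-memory bandwidth from the bus configuration: four 64-bit DDR3
# channels at an assumed 2133 MT/s.
channels, bits_per_channel = 4, 64
transfers_per_sec = 2133e6

bytes_per_transfer = channels * bits_per_channel / 8      # 32 bytes/transfer
bandwidth_gbs = bytes_per_transfer * transfers_per_sec / 1e9
print(round(bandwidth_gbs, 1))   # ~68.3 GB/s

# For comparison, quad-channel DDR3-1600 on an Intel X79 board:
x79_gbs = 4 * 64 / 8 * 1600e6 / 1e9
print(round(x79_gbs, 1))         # 51.2 GB/s
```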
Let’s talk about the cache blocks at the bottom of the image, as there’s some very curious data here.
Xbox One, front, hovering

The interesting ESRAM cache

First, while we’ve been calling this a 32MB ESRAM cache, Microsoft represents it as a series of four 8MB caches. Bandwidth to this cache is apparently 109GB/s “minimum,” but up to 204GB/s. The math on this is… odd. It’s not clear whether the ESRAM is actually a group of four 8MB caches that can be split up for different purposes, or how it is provisioned. The implication is that the cache is 1024 bits wide in total, running at the GPU’s clock speed of ~850MHz for 109GB/s in uni-directional mode, which would account for the quoted “minimum.” But that has implications for data storage: filling four blocks of 8MB each isn’t the same as addressing a contiguous 32MB block. This is still unclear.
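The implied arithmetic, taking the 1024-bit width at face value (853 MHz is the widely reported final GPU clock; the article’s ~850 MHz gives essentially the same numbers):

```python
# ESRAM bandwidth implied by a 1024-bit interface at the GPU clock.
width_bytes = 1024 / 8          # 128 bytes per cycle
clock_hz = 853e6

uni_gbs = width_bytes * clock_hz / 1e9
bi_gbs = 2 * uni_gbs            # theoretical simultaneous read+write peak
print(round(uni_gbs, 1))        # ~109.2 GB/s, the quoted "minimum"
print(round(bi_gbs, 1))         # ~218.4 GB/s theoretical; Microsoft's
                                # 204GB/s peak sits just below this
```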
The other major mystery of the ESRAM cache is the single arrow running from the CPU cache linkage down to the GPU-ESRAM bus. It’s the only skinny black arrow in the entire presentation and its use is still unclear. It implies that there’s a way for the CPU to snoop the contents of ESRAM, but there’s no mention of why that capability isn’t already provided for on the Onion/Garlic buses and it’s not clear why they’d represent this option with a tiny black arrow rather than a fat bandwidth pipe.
Even with this new information, the use and capabilities of the ESRAM remain mysterious. It’s not clear what Microsoft expects it to be used for — if it’s for caching GPU data, why break it into 8MB chunks, and why does the CPU have a connection to it?

CPU, GPU, what we now know

The GPU and CPU blocks look like we expected and detailed in our Xbox One graphics preview. DirectX 11.1+ is listed as supported and the GPU stats are matched well to the prediction that this was a Bonaire-derived core. Microsoft claims a total of 15 non-CPU processing blocks, which works out to 12 for the GPU, two for audio, and an “other” possibly related to I/O or Kinect. The CPU, as previously reported, is an eight-core Jaguar variant. Microsoft claims the audio engine contains multiple discrete function blocks with their own impressive hardware stats, but a description of audio capability is beyond the scope of this discussion.
Xbox One SoC
Save for the clockspeed bump on the GPU, the data from earlier leaked slides was accurate.
The big-picture takeaway is that the Xbox One probably is HSA capable, and the underlying architecture is very similar to a super-charged APU with much higher internal bandwidth than a normal AMD chip. That’s a non-trivial difference: the 68GB/s of bandwidth devoted to Jaguar in the Xbox One comfortably exceeds the 51.2GB/s of quad-channel DDR3-1600 bandwidth available on an Intel X79 motherboard. For all the debates over the Xbox One’s competitive positioning against the PS4, this should be an interesting micro-architecture in its own right. There are still questions regarding the ESRAM cache; breaking it into four 8MB chunks is interesting, but doesn’t tell us much about how those pieces will be used. If the cache really is 1024 bits wide, and developers can make suitable use of it, then the Xbox One’s performance might surprise us.
This technical unveil confirms much of what we suspected about the chip, but throws us some curves in other areas. All in all, a great session.

Storage Pricewatch: Get out your wallet, hard drive and SSD prices have dropped

Samsung Flash SSD
The last time we dug into storage prices was back in April, when hard drive prices showed a modest decline as they returned to pre-flood levels and SSD prices held relatively stable. Since then, we’ve seen reports of volatility in the SSD market.
We’ve teamed up with Dynamite Data once again to examine long-term price trends and what they show us about the storage market.

The SSD split

SSD prices are simultaneously increasing and decreasing depending on which capacity you focus on. This is a fairly new trend. Previously, high capacity drives were typically the most expensive as these products were aimed at enterprise/business customers or offered the highest performance money could buy. Low capacity drives were typically cheaper, since they offered fewer features or lower performance. Now, that’s changing.
SSD Price GB
When we break out the price per GB at the 128, 256, and 480-512GB capacities, we see three different cost trends. The 128GB SSDs have been the most volatile, with average prices ranging between $1.05-1.13/GB over the past six months. Costs had fallen to around $1.07 from a high of $1.12 in late July. 128GB drives tend to be a touch slower than 256-512GB products since they have fewer memory channels, and they are often based on older NAND and slower controllers.
We’ve dropped 60GB drives entirely from this comparison due to age and price; 60GB drives are often $1.50 per GB or higher now as manufacturers shift to higher capacities. The 256GB drives have also edged upwards compared to where they were at the beginning of the year. Prices have peaked and fallen again several times in the past few months, with total pricing currently sitting just under $1 per GB. At the 512GB level we see the slowest, most consistent trend — a general reduction in price with current prices hovering around 80 cents per GB. There are several factors driving this. Scan NewEgg and you’ll find a number of cheap, high-capacity SSDs based on old technology as well. This has apparently become something of a popular move — build out slower versions of a drive with huge storage space and offer it at a bargain price.

Where prices go from here

There are two big reasons to be optimistic about the future of SSD pricing. First, there’s TLC NAND. While I was initially dubious about the long-term efficacy of TLC (triple level cell) memory, Samsung’s 840 and 840 Evo have convinced me that this NAND has a strong place in consumer products. The Evo, in particular, is a great example of how pairing a TLC drive with a small SLC (single level cell) NAND cache can improve performance dramatically and guard against problematic wearing.
Second, there’s vertical NAND, also called V-NAND. As the name suggests, this is NAND that tilts the cell structure up on edge. V-NAND has only just entered production, so it’s not going to hit consumer markets yet, but the long-term outlook is promising. It’s not clear if TLC and V-NAND can currently be deployed in the same hardware, but the ITRS has made it clear that it expects V-NAND to drive SSD prices downwards over the next few years. This kind of improvement is critically important given that NAND flash faces major scaling problems below the 19nm node.
One thing to note is that while pricing in these segments as a whole has only fluctuated by ~8%, fluctuation on some drives is considerably more volatile.
The chart below shows the biggest movers and shakers for the entire SSD market since February. Keep in mind that pricing can vary dramatically depending on drive model but, again, the largest capacities tend to show less shifting than the smaller ones.
I suspect the 100GB and 110GB models listed here might be OCZ’s RevoDrives, which have been volatile of late. Still, it can pay to keep an eye on favorite drives when looking for a deal.
SSD Volatile
Prices should continue to edge downwards through 2013 and into 2014. I don’t expect any enormous drops but if Samsung’s 840 Evo continues to win favor, it could push the average selling price lower.

Hard drives: Desktop in great shape, but laptop drives show troubling trends

Desktop hard drives have returned to pre-flood levels and continued downwards from that point. I don’t have an updated version of our hard drive graph from April, but a quick check of NewEgg shows 2TB drives starting at $79.99, or roughly 4 cents per GB. 4TB drives, even WD’s high performance Black family, are available for 6-7 cents per GB. That gap means hard drives remain more than an order of magnitude cheaper than SSDs. SSD performance has increased much more rapidly than spinning disk performance, but the rate of change may slow dramatically over the next few years. TLC and V-NAND are designed to improve NAND density and cost, but not its speed. This is particularly true of TLC, which is markedly slower than MLC.
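The per-gigabyte arithmetic behind that comparison, using the prices quoted above (and marketing gigabytes, where 1TB = 1000GB):

```python
# Price-per-GB comparison between the quoted 2TB hard drive and the
# ~80 cents/GB figure from the 512GB SSD data earlier in this piece.
def cents_per_gb(price_usd: float, capacity_gb: float) -> float:
    return price_usd / capacity_gb * 100

hdd_2tb = cents_per_gb(79.99, 2000)   # 2TB drive at $79.99
ssd_512 = 80.0                        # cents/GB, from the SSD section

print(round(hdd_2tb, 1))              # ~4.0 cents/GB
print(round(ssd_512 / hdd_2tb))       # ~20x: over an order of magnitude
```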
There is, however, one negative trend I want to call out here, as it impacts laptop hard drives in particular. Back in 2009-2010, companies like Seagate confidently predicted that nearly 50% of laptop HDD volumes would be 7200RPM drives by now. Instead, Seagate has since announced that it intends to phase out 7200RPM drives altogether by the end of the year. Already, we see the trend kicking in: of the 212 laptop HDDs in stock at NewEgg, only a handful are 7200RPM models. Seagate’s Solid State Hybrid Drives (SSHDs) are available, but they’ve staked out a price point well above $100.
Fortunately, there are still 7200RPM drives from Western Digital, but Seagate’s decision to exit that market could be a sign of things to come. While SSDs remain a far better performance choice in mobile, 7200RPM drives are still noticeably quicker than 5400RPM counterparts, and it’s somewhat frustrating to be stuck with just two choices — bottom end, or SSD.

Low temperature combustion could drive efficient, clean diesel engines back into the limelight

diesel 1
Cold fusion, despite probably being impossible, is a good idea: take a process known to create a meaningful amount of power, remove the harmful byproducts, and you’ve got a winner. The same logic holds true throughout the energy world: we have solutions, but they are often saddled with too many downsides to be workable in the real world. The combustion engine is a great example, making use of a readily available, energy-dense fuel while creating dizzying amounts of pollution. If only we could change the process of extracting that energy, we could make fossil fuels at least moderately more sustainable.
Sandia National Laboratories has released a review article detailing the feasibility of low temperature combustion (LTC) diesel engines. It might more accurately be called lower temperature combustion, but as you’ll see, even a moderate decrease in heat can have a powerful effect on pollutants. Sandia concludes that LTC could drive diesel engines forward in a way desperately needed both for the environment and for competition with gasoline and electric alternatives. Since so much of the industrial sector runs on diesel fuel, particularly in the transportation of goods, that’s an important step for a technology often thought of as on its way out.
Two of the main pollutants in diesel exhaust are both products of heat. Nitrogen oxides (NOx) form primarily at the hottest point in the engine, the flame itself. When released into the air, these compounds act as pollutants in their own right, but they also react with sunlight and other particles to form smog. Particulate matter (PM), on the other hand, results from the roiling cauldron of the combustion chamber and the small pockets of overly rich fuel concentration that arise during the combustion process.
According to the Sandia researchers, both of these problems can be mitigated by meddling with the engine’s fuel-air mixture. The engine’s own exhaust can be fed back into the intake charge to absorb some of the heat of combustion and ferry it away, which keeps many of the nitrogen oxides from forming in the first place. Mixing the fuel with additional air just prior to combustion ensures there will be no fuel-rich regions to give rise to airborne particulate pollutants. It’s an elegant refinement of the basic process, and one that would not make the engines much more expensive to produce, though it might very well make them more difficult to maintain. The air-injection mixing apparatus is not unlike the fuel injection system in current engines, which is already one of the most finicky parts of the whole arrangement.
Diesel engine diagram
A conventional diesel engine, with a fuel injection system that results in uneven fuel density during combustion.
LTC has traditionally increased emissions of carbon monoxide, a toxic pollutant, along with unburned hydrocarbons. That’s frustrating because both of these problems reduce efficiency, which is one of the main advantages of diesel in the first place. The review looks at research showing an improvement in this area once the cause was traced to poor mixing of the fuel and air. By splitting the injection process into a main spurt followed by several smaller ones, researchers were able to produce a more even fuel-air distribution that burned to completion throughout the cylinder.
This research concerns itself with diesel engines, which can be more efficient but also more polluting than gasoline engines that incorporate high-end catalytic converters to filter their exhaust. If diesel can be made less polluting in turn, the biggest objection to its use on the large scale will disappear. A largely diesel-driven nation would consume less fuel per kilometer, which translates to a big chunk knocked off total national emissions. More immediately, it offers a possible solution to high emissions of vehicles that already use diesel, particularly long-haul cargo trucks.
To build the detailed maps of the combustion process needed to figure out things like the under-combustion of fuel nearest the injection site, researchers have had to pioneer some truly impressive modes of analysis. Two-photon laser-induced fluorescence was used for the first time to track the movement of combustion byproducts like formaldehyde, which in turn showed the researchers exactly how, where, and when the fuel was burning.