HackerBuddy: one-on-one mentoring for startup hackers

Q: What is HackerBuddy?

A: HackerBuddy pairs up people who have startup skills (coding, marketing, design, etc.) with fellow hackers who could do with their advice. It’s still very much in the early stages, but I’m hoping to slowly improve it over time.

Q: When did you have the idea for HackerBuddy?

A: I had the idea for HackerBuddy around the start of December 2010 – I was really keen to learn Ruby on Rails after reading Why’s Poignant Guide to Ruby and HackerBuddy seemed like a simple enough idea to get started.

Q: How long did it take to build HackerBuddy?

A: It probably took just over a month in total, although that did include learning Ruby along the way.

I think it would have probably been much quicker with a team – especially if the team already knew Ruby – but I don’t think I would have learned anywhere near as much as I have. The good thing about working on your own is that, when you need to fix a problem, you have to work it out yourself. It’s frustrating at first, but you end up better off. Having said that, the site would probably have a much better design if there was a team working on it.

Q: How long has the site been live? How is it doing?

A: So far, it’s only been live for a few days! It’s got about 50 people using it already, and it’s nice to see that some matches have been made. I’d like to think that it’s helping to introduce cool new start-ups to people who would be interested in helping them out a bit and giving them some advice.

While I may be speaking prematurely, the site hasn’t fallen over yet which, for me, is a good sign.

Q: What are your plans for the web site? Do you plan on bringing on any team members?

A: I’m going to keep improving HackerBuddy based on any feedback that I get – I’m planning on making sure that it stays quite simple. It’s not designed to do anything particularly fancy – I quite like how it matches people up, swaps their email addresses and then just gets out of the way.

No, not for HackerBuddy – it’s only a small side project. It’s fun, but not worthy of a team.

Q: What did you use to make your web site?

A: I used Ruby on Rails, Coda and Things for Mac.

Coda seems to be an unusual choice of code editor for Ruby – I remember when I was thinking about learning Ruby, a lot of sites were recommending TextMate and were fairly against Coda. I think times may have changed since those blog posts were published, because I found Coda to be genuinely useful and really pretty slick with handling Ruby code. I may just be biased towards Coda because I’ve been using it for a while though. I really like Things for Mac as a task management tool because it’s really clean and simple, and helps me to prioritise which small and simple tasks I should work on.

Q: Tell us a little about yourself

A: I’m a search marketer by day, which means that I spend most of my time staring at Google result pages. I’m not a particularly good coder, but I plan to change that. It may take some time.

Q: Is there anything about your experience developing this application you’d like to share?

A: If you’re learning how to code – there’s only so far reading can take you. Think of a simple, small thing that you’d like to build – and then try and build it. Don’t be frustrated when you can’t get it to do what you want – you’ll crack it eventually, and you’ll be a better coder for it.

Q: Can our readers contact you with more questions?

A: Absolutely! You can track me down on Twitter, you can also get in touch from my blog and lastly, you can sign up to HackerBuddy and see if I’m available to help from my profile page.



Adafruit’s New Kids’ Show To Teach Kids About Electronics

Remember Sesame Street, the kids’ TV show where Count von Count taught children simple mathematics and how to count? That was a long time ago, and now Adafruit, a kit-based electronics retailer, will soon be targeting the same crowd again to teach them about electronics, with the help of Muppets!


You heard it right: Circuit Playground, their new online show, featuring Muppets and dolls with names like Cappy the Capacitor and Hans the 555 Timer Chip, will teach the basics of electronics and circuitry to children. Limor Fried, founder and chief engineer of Adafruit, will host the episodes along with her team.

For a better learning experience, Adafruit has recently produced a coloring book, E is for Electronics. They are also manufacturing dolls of each onscreen character and have included an add-on for the Circuit Playground iPhone/iPad app.

The episodes will premiere in March 2013 on Google+ and Ustream. So, if you happen to have a couple of children around you, do make them watch these episodes. Who knows, Circuit Playground may just inspire them to become engineers in the future.

Source: Wired

Project Glass: what you need to know

When Google unveiled Project Glass, the tech world instantly fell into two camps. Camp one was excited: we’re living in the sci-fi future! Camp two, though, wasn’t so happy. “It’s vapourware!” some said, while others worried that Google just wanted to plaster ads over the entire world. Is either camp correct? Let’s find out.

What is Google’s Project Glass?

Project Glass is Google’s attempt to make wearable computing mainstream: effectively a smart pair of glasses with an integrated heads-up display and a battery hidden inside the frame.

Wearable computing is not a new idea, but Google’s enormous bank account and can-do attitude means that Project Glass could well be the first product to do significant numbers.

When will Google Glass be released?

It looks as though Project Glass will see a public release in 2014 at the earliest. Latest news is that developers will be able to get hold of ‘explorer edition’ units at some point in 2013 with a “broad consumer offering” arriving a year later.

What’s the difference between Google Glasses and Google Goggles?

Google Goggles is software, an app that can search the web based on photos and scans. Google Glass is hardware.

How does Project Glass work?

According to well-informed Google blogger Seth Weintraub, Google’s Project Glass glasses will probably use a transparent LCD or AMOLED display to put information in front of your eyeballs. It’s location-aware thanks to a camera and GPS, and you can scroll and click on information by tilting your head, something that is apparently quite easy to master. Google Glasses will also use voice input and output.

What are the Google Glass specifications?

The New York Times says that the glasses will run Android, will include a small screen in front of your eye and will have motion sensors, GPS and either 3G or 4G data connections. Weintraub says that the device is designed to be a stand-alone device rather than an Android phone peripheral: while Project Glass can connect to a smartphone via Wi-Fi or Bluetooth 4.0, “it communicates directly with the cloud”. There is also a front-facing camera and a flash, although it’s not a multi-megapixel monster, and the most recent prototype’s screen isn’t transparent.




What will I be able to do with Google Glasses?

According to Google’s own video, you’ll be a super-being with the ability to have tiny people talking to you in the corner of your eye, to find your way around using sat-nav, to know when the subway’s closed, to take and share photographs and to learn the ukulele in a day.

OK, what will I really be able to do with Google Glass? Is Google Glass a vision of the future?

Nobody knows. The idea is to deliver augmented reality, with information that’s directly relevant to your surroundings appearing in front of you whenever you need it. For example, your glasses might tell you where the nearest decent restaurant is, book your table, invite your friends and show you how to get there, or they might provide work-related information when you’re at your desk.

What information we’ll use it for, if we use it at all, remains to be seen: like Apple’s Siri, it’s a technology with enormous potential. It could even end up in contact lenses: one of the Project Glass team, Babak Parviz of the University of Washington, recently built a contact lens with embedded electronics.

I already wear glasses. Will Google Glasses work for me?

Yes. Google is experimenting with designs that will fit over existing glasses so you don’t have to wear two lots of specs.


Is Google Glass vapourware?

The New York Times says no: Google’s got some of its very best people working on the project, and experts such as wearable computing specialist Michael Liebhold say that “In addition to having a superstar team of scientists who specialize in wearable, they also have the needed data elements, including Google Maps.”

Not everyone is convinced. Wired spoke to Blair MacIntyre, director of the Augmented Environments Lab at Georgia Tech, who said “you could not do [augmented reality] with a display like this.” MIT Media Lab researcher Pranav Mistry agreed, saying that “the small screen seen in the photos cannot give the experience the video is showing.”

There are several engineering issues – making a screen that works both in darkness and in bright sunlight is tough – and mobile display technology doesn’t yet offer dynamic focusing, which reads your eye to deliver perfectly clear visuals. Current wearable displays have to make their image appear at least two feet from your face.

There’s clearly a big gap between Google’s demo video and the actual product: Google says its photos “show what this technology could look like” and its video demonstrates “what it might enable you to do” [emphasis added by us].

What is the Project Glass price?

The NYT again: according to “several Google employees familiar with the project who asked not to be named,” the glasses are expected “to cost around the price of current smartphones.” So that’s around £500, then, possibly with the help of a hefty Google subsidy.

Is Project Glass evil?

It could be. Google’s business is about making money from advertising, and some people worry that Google Glass is its attempt to monetise your eyeballs by blasting you with ads whenever you look at something.

If you think pop-ups are annoying in a web browser, imagine them in front of your face. The ADmented Reality spoof is one of the many parodies that made us laugh.

Some of the parodies actually make a good point by showing people bumping into stuff: heads-up displays can be distracting, and there may be safety issues too. Until Google ships its self-driving car, the thought of drivers being distracted by their glasses is fairly terrifying.

There are privacy implications too. Never mind your web history: Google Glass might record everything you see and do.

Google Glass pre-order customers will get regular updates

Those people who paid Google $1,500 for the privilege of pre-ordering some Project Glass specs will be receiving “private updates” through Google+.

Will Google Glasses make me look like a dork?

Er… yes.

Google Data Centers

If you’re looking for the beating heart of the digital age — a physical location where the scope, grandeur, and geekiness of the kingdom of bits become manifest—you could do a lot worse than Lenoir, North Carolina. This rural city of 18,000 was once rife with furniture factories. Now it’s the home of a Google data center.

A central cooling plant in Google’s Douglas County, Georgia, data center.
Photo: Google/Connie Zhou

Engineering prowess famously catapulted the 14-year-old search giant into its place as one of the world’s most successful, influential, and frighteningly powerful companies. Its constantly refined search algorithm changed the way we all access and even think about information. Its equally complex ad-auction platform is a perpetual money-minting machine. But other, less well-known engineering and strategic breakthroughs are arguably just as crucial to Google’s success: its ability to build, organize, and operate a huge network of servers and fiber-optic cables with an efficiency and speed that rocks physics on its heels. Google has spread its infrastructure across a global archipelago of massive buildings—a dozen or so information palaces in locales as diverse as Council Bluffs, Iowa; St. Ghislain, Belgium; and soon Hong Kong and Singapore—where an unspecified but huge number of machines process and deliver the continuing chronicle of human experience.

This is what makes Google Google: its physical network, its thousands of fiber miles, and those many thousands of servers that, in aggregate, add up to the mother of all clouds. This multibillion-dollar infrastructure allows the company to index 20 billion web pages a day. To handle more than 3 billion daily search queries. To conduct millions of ad auctions in real time. To offer free email storage to 425 million Gmail users. To zip millions of YouTube videos to users every day. To deliver search results before the user has finished typing the query. In the near future, when Google releases the wearable computing platform called Glass, this infrastructure will power its visual search results.

The problem for would-be bards attempting to sing of these data centers has been that, because Google sees its network as the ultimate competitive advantage, only critical employees have been permitted even a peek inside, a prohibition that has most certainly included bards. Until now.

A server room in Council Bluffs, Iowa.
Photo: Google/Connie Zhou

Here I am, in a huge white building in Lenoir, standing near a reinforced door with a party of Googlers, ready to become that rarest of species: an outsider who has been inside one of the company’s data centers and seen the legendary server floor, referred to simply as “the floor.” My visit is the latest evidence that Google is relaxing its black-box policy. My hosts include Joe Kava, who’s in charge of building and maintaining Google’s data centers, and his colleague Vitaly Gudanets, who populates the facilities with computers and makes sure they run smoothly.

A sign outside the floor dictates that no one can enter without hearing protection, either salmon-colored earplugs that dispensers spit out like trail mix or panda-bear earmuffs like the ones worn by airline ground crews. (The noise is a high-pitched thrum from fans that control airflow.) We grab the plugs. Kava holds his hand up to a security scanner and opens the heavy door. Then we slip into a thunderdome of data …

Urs Hölzle had never stepped into a data center before he was hired by Sergey Brin and Larry Page. A hirsute, soft-spoken Swiss, Hölzle was on leave as a computer science professor at UC Santa Barbara in February 1999 when his new employers took him to the Exodus server facility in Santa Clara. Exodus was a colocation site, or colo, where multiple companies rent floor space. Google’s “cage” sat next to servers from eBay and other blue-chip Internet companies. But the search company’s array was the most densely packed and chaotic. Brin and Page were looking to upgrade the system, which often took a full 3.5 seconds to deliver search results and tended to crash on Mondays. They brought Hölzle on to help drive the effort.

It wouldn’t be easy. Exodus was “a huge mess,” Hölzle later recalled. And the cramped hodgepodge would soon be strained even more. Google was not only processing millions of queries every week but also stepping up the frequency with which it indexed the web, gathering every bit of online information and putting it into a searchable format. AdWords—the service that invited advertisers to bid for placement alongside search results relevant to their wares—involved computation-heavy processes that were just as demanding as search. Page had also become obsessed with speed, with delivering search results so quickly that it gave the illusion of mind reading, a trick that required even more servers and connections. And the faster Google delivered results, the more popular it became, creating an even greater burden. Meanwhile, the company was adding other applications, including a mail service that would require instant access to many petabytes of storage. Worse yet, the tech downturn that left many data centers underpopulated in the late ’90s was ending, and Google’s future leasing deals would become much more costly.

For Google to succeed, it would have to build and operate its own data centers—and figure out how to do it more cheaply and efficiently than anyone had before. The mission was codenamed Willpower. Its first built-from-scratch data center was in The Dalles, a city in Oregon near the Columbia River.

Hölzle and his team designed the $600 million facility in light of a radical insight: Server rooms did not have to be kept so cold. The machines throw off prodigious amounts of heat. Traditionally, data centers cool them off with giant computer room air conditioners, or CRACs, typically jammed under raised floors and cranked up to arctic levels. That requires massive amounts of energy; data centers consume up to 1.5 percent of all the electricity in the world.


Google realized that the so-called cold aisle in front of the machines could be kept at a relatively balmy 80 degrees or so—workers could wear shorts and T-shirts instead of the standard sweaters. And the “hot aisle,” a tightly enclosed space where the heat pours from the rear of the servers, could be allowed to hit around 120 degrees. That heat could be absorbed by coils filled with water, which would then be pumped out of the building and cooled before being circulated back inside. Add that to the long list of Google’s accomplishments: The company broke its CRAC habit.

Google also figured out money-saving ways to cool that water. Many data centers relied on energy-gobbling chillers, but Google’s big data centers usually employ giant towers where the hot water trickles down through the equivalent of vast radiators, some of it evaporating and the remainder attaining room temperature or lower by the time it reaches the bottom. In its Belgium facility, Google uses recycled industrial canal water for the cooling; in Finland it uses seawater.

The company’s analysis of electrical flow unearthed another source of waste: the bulky uninterruptible-power-supply systems that protected servers from power disruptions in most data centers. Not only did they leak electricity, they also required their own cooling systems. But because Google designed the racks on which it placed its machines, it could make space for backup batteries next to each server, doing away with the big UPS units altogether. According to Joe Kava, that scheme reduced electricity loss by about 15 percent.

All of these innovations helped Google achieve unprecedented energy savings. The standard measurement of data center efficiency is called power usage effectiveness, or PUE. A perfect number is 1.0, meaning all the power drawn by the facility is put to use. Experts considered 2.0—indicating half the power is wasted—to be a reasonable number for a data center. Google was getting an unprecedented 1.2.
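The PUE numbers above follow from a simple ratio: the total power a facility draws divided by the power that actually reaches the IT equipment. A minimal sketch of that arithmetic (illustrative only, not Google’s measurement tooling):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power."""
    return total_facility_kw / it_equipment_kw

def overhead_fraction(pue_value: float) -> float:
    """Fraction of facility power lost to cooling, conversion, lighting."""
    return 1 - 1 / pue_value

# A PUE of 2.0 means half the facility's power never reaches a server...
print(overhead_fraction(2.0))            # 0.5
# ...while Google's 1.2 wastes only about a sixth of it.
print(round(overhead_fraction(1.2), 2))  # 0.17
```

A facility drawing 1,200 kW to run 1,000 kW of servers has a PUE of 1.2, which is why the figure was considered unprecedented at the time.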

For years Google didn’t share what it was up to. “Our core advantage really was a massive computer network, more massive than probably anyone else’s in the world,” says Jim Reese, who helped set up the company’s servers. “We realized that it might not be in our best interest to let our competitors know.”

But stealth had its drawbacks. Google was on record as being an exemplar of green practices. In 2007 the company committed formally to carbon neutrality, meaning that every molecule of carbon produced by its activities—from operating its cooling units to running its diesel generators—had to be canceled by offsets. Maintaining secrecy about energy savings undercut that ideal: If competitors knew how much energy Google was saving, they’d try to match those results, and that could make a real environmental impact. Also, the stonewalling, particularly regarding The Dalles facility, was becoming almost comical. Google’s ownership had become a matter of public record, but the company still refused to acknowledge it.

In 2009, at an event dubbed the Efficient Data Center Summit, Google announced its latest PUE results and hinted at some of its techniques. It marked a turning point for the industry, and now companies like Facebook and Yahoo report similar PUEs.

Make no mistake, though: The green that motivates Google involves presidential portraiture. “Of course we love to save energy,” Hölzle says. “But take something like Gmail. We would lose a fair amount of money on Gmail if we did our data centers and servers the conventional way. Because of our efficiency, we can make the cost small enough that we can give it away for free.”

Google’s breakthroughs extend well beyond energy. Indeed, while Google is still thought of as an Internet company, it has also grown into one of the world’s largest hardware manufacturers, thanks to the fact that it builds much of its own equipment. In 1999, Hölzle bought parts for 2,000 stripped-down “breadboards” from “three guys who had an electronics shop.” By going homebrew and eliminating unneeded components, Google built a batch of servers for about $1,500 apiece, instead of the then-standard $5,000. Hölzle, Page, and a third engineer designed the rigs themselves. “It wasn’t really ‘designed,’” Hölzle says, gesturing with air quotes.

More than a dozen generations of Google servers later, the company now takes a much more sophisticated approach. Google knows exactly what it needs inside its rigorously controlled data centers—speed, power, and good connections—and saves money by not buying unnecessary extras. (No graphics cards, for instance, since these machines never power a screen. And no enclosures, because the motherboards go straight into the racks.) The same principle applies to its networking equipment, some of which Google began building a few years ago.

Outside the Council Bluffs data center, radiator-like cooling towers chill water from the server floor down to room temperature.
Photo: Google/Connie Zhou

So far, though, there’s one area where Google hasn’t ventured: designing its own chips. But the company’s VP of platforms, Bart Sano, implies that even that could change. “I’d never say never,” he says. “In fact, I get that question every year. From Larry.”

Even if you reimagine the data center, the advantage won’t mean much if you can’t get all those bits out to customers speedily and reliably. And so Google has launched an attempt to wrap the world in fiber. In the early 2000s, taking advantage of the failure of some telecom operations, it began buying up abandoned fiber-optic networks, paying pennies on the dollar. Now, through acquisition, swaps, and actually laying down thousands of strands, the company has built a mighty empire of glass.

But when you’ve got a property like YouTube, you’ve got to do even more. It would be slow and burdensome to have millions of people grabbing videos from Google’s few data centers. So Google installs its own server racks in various outposts of its network—mini data centers, sometimes connected directly to ISPs like Comcast or AT&T—and stuffs them with popular videos. That means that if you stream, say, a Carly Rae Jepsen video, you probably aren’t getting it from Lenoir or The Dalles but from some colo just a few miles from where you are.

Over the years, Google has also built a software system that allows it to manage its countless servers as if they were one giant entity. Its in-house developers can act like puppet masters, dispatching thousands of computers to perform tasks as easily as running a single machine. In 2002 its scientists created Google File System, which smoothly distributes files across many machines. MapReduce, a Google system for writing cloud-based applications, was so successful that an open source version called Hadoop has become an industry standard. Google also created software to tackle a knotty issue facing all huge data operations: When tasks come pouring into the center, how do you determine instantly and most efficiently which machines can best afford to take on the work? Google has solved this “load-balancing” issue with an automated system called Borg.
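The MapReduce programming model, later popularized by Hadoop, can be illustrated with the classic word-count example. This is a toy single-machine sketch of the map, shuffle, and reduce phases, not Google’s distributed implementation:

```python
from collections import defaultdict
from itertools import chain

def map_phase(records, mapper):
    # Run the user-supplied mapper over every input record,
    # yielding intermediate (key, value) pairs.
    return chain.from_iterable(mapper(r) for r in records)

def shuffle(pairs):
    # Group intermediate values by key, as the framework does
    # between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reducer):
    # Apply the user-supplied reducer to each key's values.
    return {key: reducer(key, values) for key, values in groups.items()}

# Word count: the mapper emits (word, 1); the reducer sums the ones.
docs = ["the data center", "the warehouse scale computer"]
counts = reduce_phase(
    shuffle(map_phase(docs, lambda d: ((w, 1) for w in d.split()))),
    lambda key, values: sum(values),
)
print(counts["the"])  # 2
```

In the real system the map and reduce tasks run on thousands of machines, with the framework handling the shuffle, scheduling, and failure recovery; the programmer writes only the two small functions.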

These innovations allow Google to fulfill an idea embodied in a 2009 paper written by Hölzle and one of his top lieutenants, computer scientist Luiz Barroso: “The computing platform of interest no longer resembles a pizza box or a refrigerator but a warehouse full of computers … We must treat the data center itself as one massive warehouse-scale computer.”

This is tremendously empowering for the people who write Google code. Just as your computer is a single device that runs different programs simultaneously—and you don’t have to worry about which part is running which application—Google engineers can treat seas of servers like a single unit. They just write their production code, and the system distributes it across a server floor they will likely never be authorized to visit. “If you’re an average engineer here, you can be completely oblivious,” Hölzle says. “You can order x petabytes of storage or whatever, and you have no idea what actually happens.”

But of course, none of this infrastructure is any good if it isn’t reliable. Google has innovated its own answer for that problem as well—one that involves a surprising ingredient for a company built on algorithms and automation: people.

At 3 am on a chilly winter morning, a small cadre of engineers begin to attack Google. First they take down the internal corporate network that serves the company’s Mountain View, California, campus. Later the team attempts to disrupt various Google data centers by causing leaks in the water pipes and staging protests outside the gates—in hopes of distracting attention from intruders who try to steal data-packed disks from the servers. They mess with various services, including the company’s ad network. They take a data center in the Netherlands offline. Then comes the coup de grâce—cutting most of Google’s fiber connection to Asia.

Turns out this is an inside job. The attackers, working from a conference room on the fringes of the campus, are actually Googlers, part of the company’s Site Reliability Engineering team, the people with ultimate responsibility for keeping Google and its services running. SREs are not merely troubleshooters but engineers who are also in charge of getting production code onto the “bare metal” of the servers; many are embedded in product groups for services like Gmail or search. Upon becoming an SRE, members of this geek SEAL team are presented with leather jackets bearing a military-style insignia patch. Every year, the SREs run this simulated war—called DiRT (disaster recovery testing)—on Google’s infrastructure. The attack may be fake, but it’s almost indistinguishable from reality: Incident managers must go through response procedures as if they were really happening. In some cases, actual functioning services are messed with. If the teams in charge can’t figure out fixes and patches to keep things running, the attacks must be aborted so real users won’t be affected. In classic Google fashion, the DiRT team always adds a goofy element to its dead-serious test—a loony narrative written by a member of the attack team. This year it involves a Twin Peaks-style supernatural phenomenon that supposedly caused the disturbances. Previous DiRTs were attributed to zombies or aliens.

Some halls in Google’s Hamina, Finland, data center remain vacant—for now.
Photo: Google/Connie Zhou

As the first attack begins, Kripa Krishnan, an upbeat engineer who heads the annual exercise, explains the rules to about 20 SREs in a conference room already littered with junk food. “Do not attempt to fix anything,” she says. “As far as the people on the job are concerned, we do not exist. If we’re really lucky, we won’t break anything.” Then she pulls the plug—for real—on the campus network. The team monitors the phone lines and IRC channels to see when the Google incident managers on call around the world notice that something is wrong. It takes only five minutes for someone in Europe to discover the problem, and he immediately begins contacting others.

“My role is to come up with big tests that really expose weaknesses,” Krishnan says. “Over the years, we’ve also become braver in how much we’re willing to disrupt in order to make sure everything works.” How did Google do this time? Pretty well. Despite the outages in the corporate network, executive chair Eric Schmidt was able to run a scheduled global all-hands meeting. The imaginary demonstrators were placated by imaginary pizza. Even shutting down three-fourths of Google’s Asia traffic capacity didn’t shut out the continent, thanks to extensive caching. “This is the best DiRT ever!” Krishnan exclaimed at one point.

The SRE program began when Hölzle charged an engineer named Ben Treynor with making Google’s network fail-safe. This was especially tricky for a massive company like Google that is constantly tweaking its systems and services—after all, the easiest way to stabilize it would be to freeze all change. Treynor ended up rethinking the very concept of reliability. Instead of trying to build a system that never failed, he gave each service a budget—an amount of downtime it was permitted to have. Then he made sure that Google’s engineers used that time productively. “Let’s say we wanted Google+ to run 99.95 percent of the time,” Hölzle says. “We want to make sure we don’t get that downtime for stupid reasons, like we weren’t paying attention. We want that downtime because we push something new.”

Nevertheless, accidents do happen—as Sabrina Farmer learned on the morning of April 17, 2012. Farmer, who had been the lead SRE on the Gmail team for a little over a year, was attending a routine design review session. Suddenly an engineer burst into the room, blurting out, “Something big is happening!” Indeed: For 1.4 percent of users (a large number of people), Gmail was down. Soon reports of the outage were all over Twitter and tech sites. They were even bleeding into mainstream news.

The conference room transformed into a war room. Collaborating with a peer group in Zurich, Farmer launched a forensic investigation. A breakthrough came when one of her Gmail SREs sheepishly admitted, “I pushed a change on Friday that might have affected this.” Those responsible for vetting the change hadn’t been meticulous, and when some Gmail users tried to access their mail, various replicas of their data across the system were no longer in sync. To keep the data safe, the system froze them out.

The diagnosis had taken 20 minutes, designing the fix 25 minutes more—pretty good. But the event went down as a Google blunder. “It’s pretty painful when SREs trigger a response,” Farmer says. “But I’m happy no one lost data.” Nonetheless, she’ll be happier if her future crises are limited to DiRT-borne zombie attacks.

One scenario that DiRT never envisioned was the presence of a reporter on a server floor. But here I am in Lenoir, earplugs in place, with Joe Kava motioning me inside.

We have passed through the heavy gate outside the facility, with remote-control barriers evoking the Korean DMZ. We have walked through the business offices, decked out in Nascar regalia. (Every Google data center has a decorative theme.) We have toured the control room, where LCD dashboards monitor every conceivable metric. Later we will climb up to catwalks to examine the giant cooling towers and backup electric generators, which look like Beatle-esque submarines, only green. We will don hard hats and tour the construction site of a second data center just up the hill. And we will stare at a rugged chunk of land that one day will hold a third mammoth computational facility.

But now we enter the floor. Big doesn’t begin to describe it. Row after row of server racks seem to stretch to eternity. Joe Montana in his prime could not throw a football the length of it.

During my interviews with Googlers, the idea of hot aisles and cold aisles has been an abstraction, but on the floor everything becomes clear. The cold aisle refers to the general room temperature—which Kava confirms is 77 degrees. The hot aisle is the narrow space between the backsides of two rows of servers, tightly enclosed by sheet metal on the ends. A nest of copper coils absorbs the heat. Above are huge fans, which sound like jet engines jacked through Marshall amps.

We walk between the server rows. All the cables and plugs are in front, so no one has to crack open the sheet metal and venture into the hot aisle, thereby becoming barbecue meat. (When someone does have to head back there, the servers are shut down.) Every server has a sticker with a code that identifies its exact address, useful if something goes wrong. The servers have thick black batteries alongside. Everything is uniform and in place—nothing like the spaghetti tangles of Google’s long-ago Exodus era.

Blue lights twinkle, indicating … what? A web search? Someone’s Gmail message? A Glass calendar event floating in front of Sergey’s eyeball? It could be anything.

Every so often a worker appears—a long-haired dude in shorts propelling himself by scooter, or a woman in a T-shirt who’s pushing a cart with a laptop on top and dispensing repair parts to servers like a psychiatric nurse handing out meds. (In fact, the area on the floor that holds the replacement gear is called the pharmacy.)

How many servers does Google employ? It’s a question that has dogged observers since the company built its first data center. It has long stuck to “hundreds of thousands.” (There are 49,923 operating in the Lenoir facility on the day of my visit.) I will later come across a clue when I get a peek inside Google’s data center R&D facility in Mountain View. In a secure area, there’s a row of motherboards fixed to the wall, an honor roll of generations of Google’s homebrewed servers. One sits atop a tiny embossed plaque that reads JULY 9, 2008. GOOGLE’S MILLIONTH SERVER. But executives explain that this is a cumulative number, not necessarily an indication that Google has a million servers in operation at once.

Wandering the cold aisles of Lenoir, I realize that the magic number, if it is even obtainable, is basically meaningless. Today’s machines, with multicore processors and other advances, have many times the power and utility of earlier versions. A single Google server circa 2012 may be the equivalent of 20 servers from a previous generation. In any case, Google thinks in terms of clusters—huge numbers of machines that act together to provide a service or run an application. “An individual server means nothing,” Hölzle says. “We track computer power as an abstract metric.” It’s the realization of a concept Hölzle and Barroso spelled out three years ago: the data center as a computer.

As we leave the floor, I feel almost levitated by my peek inside Google’s inner sanctum. But a few weeks later, back at the Googleplex in Mountain View, I realize that my epiphanies have limited shelf life. Google’s intention is to render the data center I visited obsolete. “Once our people get used to our 2013 buildings and clusters,” Hölzle says, “they’re going to complain about the current ones.”

Asked in what areas one might expect change, Hölzle mentions data center and cluster design, speed of deployment, and flexibility. Then he stops short. “This is one thing I can’t talk about,” he says, a smile cracking his bearded visage, “because we’ve spent our own blood, sweat, and tears. I want others to spend their own blood, sweat, and tears making the same discoveries.” Google may be dedicated to providing access to all the world’s data, but some information it’s still keeping to itself.

Senior writer Steven Levy (steven_levy@wired.com) interviewed Mary Meeker in issue 20.10.

Now Ask Siri To Open Your Garage Door By Using Raspberry Pi

If Doorbot was an interesting proposition for your front door, here is a similar innovation for your garage door that combines a Raspberry Pi and Siri. A Raspberry Pi user who goes by the name “DarkTherapy” has posted a project on the Raspberry Pi forum showing how to use the micro Linux PC to open an automatic garage door with voice commands through Siri. Have a look at the video below, which shows the user opening the garage door with his iPhone:

In order to make this voice activated garage door opener you need the following things:

  • SiriProxy running on the Raspberry Pi, to enable custom commands in Siri
  • wiringPi, to connect the Pi’s General Purpose Input/Output (GPIO) pins to the garage door relay
  • Local Wi-Fi, to establish the network between the devices

Once you have made sure that the Raspberry Pi is able to receive SiriProxy commands, some code changes are required on the SiriProxy server; for interested users, the custom code has been put up in this post. This project also makes you wonder whether the same trick could be applied to other devices that are compatible with the Raspberry Pi’s GPIO pins, which could lead to some more interesting innovations.
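The post doesn’t reproduce the code itself, but the core action is straightforward: when SiriProxy matches the spoken phrase, the Pi briefly energizes a relay wired across the opener’s button contacts. Here is a rough Python sketch of that pulse logic; it is purely illustrative — the actual project uses SiriProxy’s Ruby plugin API and wiringPi, and the `Relay` class below is a hypothetical stand-in for real GPIO writes.

```python
import time

class Relay:
    """Hypothetical stand-in for a GPIO-driven relay; the real project
    writes to a Raspberry Pi GPIO pin through wiringPi."""
    def __init__(self):
        self.writes = []            # record of pin writes, for illustration

    def write(self, value):
        self.writes.append(value)

def pulse_garage_door(relay, hold_seconds=0.5):
    """Garage-door openers trigger on a momentary contact closure, so the
    relay is switched on, held briefly, then switched off again."""
    relay.write(1)                  # energize the relay (contact closed)
    time.sleep(hold_seconds)        # hold long enough for the opener to see it
    relay.write(0)                  # release (contact open)

relay = Relay()
pulse_garage_door(relay, hold_seconds=0.01)
print(relay.writes)                 # → [1, 0]
```

On real hardware the two `write` calls would become GPIO writes, and the SiriProxy plugin would invoke the pulse when it recognizes the command.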

Source: via Raspberry Pi Forum

International Linear Collider (ILC) : Japanese Engineers Want To Collide Electrons and Positrons

MadLab Industries Present Hexapod With Hexcopter – The Killer Robo Combo!

MadLab Industries hasn’t failed to impress us with its all-new hexcopter combined with a hexapod, which looks as awesome as it performs. Using the PhantomX Hexapod kit from Trossen Robotics, the MadLab team of Don Miller, Jason Penick, and Jason Williams, together with Trossen Robotics’ Andrew Alter, reduced the hexapod’s weight and gave it the means to fly. They replaced the hexapod’s original ABS plates with carbon fiber to bring the weight down significantly, and devised a six-rotor setup to increase the stability of the unit in the air. This ultimate robo combo can crawl along the ground and take to the air like an efficient flying insect.


The other major components that make up this hexapodcopter are a Hoverfly Pro flight controller, six E-Flite Power 15 motors, and six E-Flite 40-amp ESCs (Electronic Speed Controls). Even though the controls for the hexapod and hexcopter are still separate, the team plans to merge them soon. We think a catch-and-release mechanism that lets the copter detach and reattach the walker would be an awesome feature to have, and the team says they are already working on it. How cool is that?


Here we have the first video from the MadLab Industries team, where their robot walks around, takes off, lands, and continues walking. Check it out –

And here’s another video where you see it in action, showing off the movement of the legs, walking around, taking off while walking, landing while walking, and, by user request, picking stuff up with its legs! You don’t want to miss this either –

KADE – Connects arcade controls to computers and consoles

Easily connect arcade controls to your favourite consoles and computers with our ground-breaking open source device

KADE is an open source arcade interface designed to make it easy to connect arcade controls to your favourite consoles and computers. This is achieved using a ground-breaking combination of open source firmware, open hardware, and a free software loader.

KADE already has support for systems including:

  • USB/HID Joystick (Windows, Linux, Mac, Android)
  • USB/HID Keyboard (Windows, Linux, Mac, Android)
  • Playstation 1
  • Playstation 2
  • Playstation 3
  • Original Xbox
  • MAME and Pinball

Additional systems are supported when KADE is coupled with a low-cost adaptor, including the Xbox 360, Gamecube, Wii, Dreamcast, and many more.

We are calling for the community of arcade enthusiasts, stick builders and retro gamers to embrace KADE and we encourage them to get involved and contribute their ideas.

We hope that the open source nature of our device will accelerate the development process and help us to introduce new features and fast-track support for other systems and products.  We already have a beta tester using the open hardware to build his own hardware features into a custom board.


KADE was born as a solution to the various problems that we had faced when interfacing arcade controls to our own projects.

Kevin has built many arcade cabinets, and before KADE he relied on soldering pad hacks to wire the controls to a wide range of consoles.  He didn’t like that most of the DIY arcade builders he was helping were unable to reliably solder gamepads, which created a barrier to using consoles in cabinets.  Jon worked on several interface solutions before KADE, including various controller and adaptor hacks as well as an AVR keyboard encoder, a predecessor to KADE.  These solutions were all very limited in scope.  Bruno built the open source, open hardware RetroPad Adapter and was happy to collaborate with Jon and Kevin in the spirit of open hardware and software.

These earlier attempts at developing an interface were frustrating but very necessary; they were a great learning curve for us.  Kevin adapted his architecture CAD skills to PCB design using Eagle (with Bruno’s guidance), Jon learned AVR/GCC development on microcontrollers, and Bruno, with his sound knowledge of retro consoles and his track record of making retro gamepad adapters, is now applying his skills to KADE.

We are launching a project, not just a product.

As a team, and with the support of the community, we see no limit to what can be achieved.  If we can get just a bit of momentum, we will work with the community to create more Open Source arcade solutions for all sorts of gaming needs.  We built the KADE to do what we want as well as what we think the community wants.  We are looking forward to adding features as a broader group of folks are using it.

You can rest assured that KickStarter funds will be invested straight back into the project.

“The main goal of my involvement in the project is to make it so we can support buying new hardware and software to continue to push Open Source arcade and emulation hardware.” – Kevin

USB Enabled AVR Microcontroller

The brains of KADE is a USB-enabled AVR.  For the initial launch we have decided to use a Minimus AVR, which is powered by an Atmel microcontroller (either the AT90USB162 or the ATMEGA32U2).

There is scope to use other development platforms too!  We’ve already beta tested Loader support for Arduino and the Arduino Pro Mini boards.  Depending on feedback, we may develop firmware support for Arduino in the future.

Arcade PCB

The arcade PCB has been designed as a breakout board to make wiring easy with the standard sized screw terminals. The AVR itself can be easily unplugged from the Arcade PCB for programming after the device is fitted into an arcade cabinet or fight stick.

Early KADE Arcade PCB prototype (older design)

The KADE hardware is open source too. The schematics and build instructions will be made available for DIY’ers who want to build their own devices. Kits will also be available for those who need a little help to get started.

You’ll be surprised to hear that there are already early adopters of KADE technology developing their own extension boards to integrate KADE with external peripherals and enable remote-control operation.


We have developed a collection of firmwares that can be programmed to the KADE device to make it work with various systems.

All the firmwares will be made open source. If you know how to adapt the code and compile it, you can make changes as you see fit, as long as you follow the open source licensing.  Firmware will be released under the terms of the GNU General Public License as published by the Free Software Foundation.

Loader and Customiser

KADE Loader is free software that allows you to easily load any of the KADE firmwares onto the AVR via USB.  Simply select a firmware and load it onto the KADE device with one button press!  The loader is free but not open source. It is written for Windows and works fine in VMware running Windows on Mac OS X.

KADE Loader has extensive customisation options that put you firmly in control. You get to choose the system (from those supported) and the functions that suit your specific project.

More advanced users may wish to customise the KADE to work with their control panel.  This is easy to do, just pick a function, from those available, and map it to one of the available pins on the KADE.

The KADE has 20 pins (labelled A1-A10 and B1-B10), and each can be assigned to an input function.  There is also a shift pin (labelled HWB).  When a button wired to the shift pin is held, it activates a shifted mode in which each of the 20 pins takes on a second function, giving a total of 40 customisable inputs.
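As a way of picturing the mapping, here is a toy Python sketch of how a shift pin doubles the number of assignable inputs. The pin labels match the text, but the function tables are made-up examples, not KADE’s actual firmware or configuration format.

```python
# Pin labels from the text: A1-A10 and B1-B10, plus the HWB shift pin.
PINS = [f"A{i}" for i in range(1, 11)] + [f"B{i}" for i in range(1, 11)]

# Example assignments only; a real configuration would fill all 20 pins.
normal_map  = {"A1": "D-Pad Up", "A5": "A Button"}
shifted_map = {"A1": "Auto Fire - A Button", "A5": "Exit Game"}

def resolve(pin, shift_held):
    """Return the function assigned to a pin, honouring the shift mode."""
    table = shifted_map if shift_held else normal_map
    return table.get(pin)

print(resolve("A1", shift_held=False))   # → D-Pad Up
print(resolve("A1", shift_held=True))    # → Auto Fire - A Button
print(len(PINS) * 2)                     # 20 pins x 2 modes = 40 inputs
```

The same lookup idea is what lets the Loader expose 40 customisable inputs from only 20 physical pins plus one shift button.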

Early KADE Arcade PCB prototype (older design)

KADE supports all of the regular controller inputs and also introduces some of its own functions that you would not find on the original controller.
Here is a list of the functions currently provided for Xbox. Expect similar functions on the other systems. Each of these functions can be assigned to an input on the KADE encoder.

1) D-Pad Up
2) D-Pad Down
3) D-Pad Left
4) D-Pad Right
5) A Button
6) B Button
7) X Button
8) Y Button
9) Left Trigger
10) Right Trigger
11) Black Button
12) White Button
13) Start Button
14) Back Button
15) Left Thumb Button
16) Right Thumb Button
17) Left Analog Stick – Up
18) Left Analog Stick – Down
19) Left Analog Stick – Left
20) Left Analog Stick – Right
21) Right Analog Stick – Up
22) Right Analog Stick – Down
23) Right Analog Stick – Left
24) Right Analog Stick – Right
25) Exit Game (Combination of Start and Back Buttons)
26) Exit to Dashboard (Combination of Triggers, Back and Black)
27) Invert Y Axis of Analog Sticks
28) D-Pad Restrict to 4-Way Operation
29) D-Pad Restrict to 2-Way Horizontal
30) D-Pad Restrict to 2-Way Vertical
31) Auto Fire – A Button
32) Auto Fire – B Button
33) Auto Fire – X Button
34) Auto Fire – Y Button
35) Connect external LED to this pin
36) Connect external +5V peripheral to this pin (e.g. relay)
37) Put KADE in program mode for firmware update

*Plus there are lots of other combo and emulator-specific functions that are not listed, and more functions are being developed as you read this!

KADE Loader has auto-update built in so you will benefit from new features and additional firmwares automatically as soon as we make them available.

We have an excellent complement of skills and work together really well, despite being thousands of miles apart and in different time zones!
We would love to meet and share a beer someday.

Jon Wilson (UK)

Jon is a software guy with over 20 years of programming experience, and he likes to make electronic things. Prior to KADE he spent much of his spare time building arcade machines.

Jon has previously engaged with the arcade community to develop a USB multi-function keyboard encoder, built with similar technology to KADE.

Jon produced a video guide to building an arcade cabinet.

Bruno Freitas (Brazil)

Bruno is a software engineer who recently discovered the beauty of micro-controllers. He’s an avid retro gamer and he’s about to have an old dream come true: having his own arcade machine. It will be powered by KADE, of course!

Bruno is an open source and open hardware enthusiast. Among his works, the ones which most stand out are:

  • Wii RetroPad Adapter – an adapter that connects old controllers to the Nintendo Wii
  • USB RetroPad Adapter – an adapter that connects old controllers to PCs and the PS3
  • RetroVGA – a VGA scanline generator

Bruno is also very proud of being part of the KADE Encoder core development team.

Kevin Mackett (US)

Kevin has over 20 years of experience in software development and educational technology. He grew up playing in arcades and built his first home arcade cabinet back in the 90s.

Kevin enjoys supporting DIY home arcade projects by sharing his arcade control, cabinet, and emulation knowledge, especially when the Xbox and CoinOPS are involved. Prior to KADE he spent way too much time doing pad hacks to get his Xbox, Xbox 360, Dreamcast, PS2, PC, Android phone/tablet, and Mac hooked up to arcade controls.

RISKS AND CHALLENGES

Hardware- Parts
Most of the parts used in the KADE are readily available from multiple sources. We have already purchased enough AVRs to cover a complete reward sell-out.

Hardware- PCB fabrication
We have settled on OSH Park as our fabrication house because doing larger orders helps support the small-order business that makes PCB fabrication affordable. If OSH Park has trouble manufacturing the PCBs, we have three other fabrication options to choose from.

Hardware- Assembly
Jon, Bruno, or Kevin will personally solder your Kickstarter reward KADE. Since each of us has experience doing this kind of work, if one of us has assembly issues, the others are prepared to pick up the slack.

KADE has been in the works for a few months, and the software and hardware have gone through a number of revisions and testing cycles. While not every feature we plan to add to the KADE has been implemented yet, backers who choose a KADE as a reward will receive a completed product that will likely gain additional features after release!

Jon, Bruno, and Kevin bring combined experience in hardware and software development, as well as project and product management, that is well suited to bringing a project like KADE together and delivering rewards on time. We have done hardware development of similar scale, including gaming-related devices, as well as large-scale software development. Together, we are confident we will be able to complete the Kickstarter on time and within budget.




There is an urgent need for improved security in the banking sector. With the advent of the ATM, banking became a lot easier, but it also became a lot more vulnerable. The chances of misuse of this much-hyped but insecure product are manifold, given the growing sophistication of criminals. ATM systems today use no more than an access card and PIN for identity verification. This situation is unfortunate, since tremendous progress has been made in biometric identification techniques, including fingerprinting, facial recognition, and iris scanning. This paper proposes the development of a system that integrates facial recognition and iris scanning technology into the identity verification process used in ATMs. Such a system would serve to protect consumers and financial institutions alike from fraud and other breaches of security.


The rise of technology in India has brought into force many types of equipment aimed at greater customer satisfaction. The ATM is one such machine, which made money transactions easy for bank customers. The other side of this improvement is the enhanced probability that a culprit gets his ‘unauthentic’ share. Traditionally, security is handled by requiring the combination of a physical access card and a PIN or other password in order to access a customer’s account. This model invites fraudulent attempts through stolen cards, badly chosen or automatically assigned PINs, cards with little or no encryption, employees with access to non-encrypted customer account information, and other points of failure.

Our paper proposes an automatic teller machine security model that would combine a physical access card, a PIN, and electronic facial recognition. By forcing the ATM to match a live image of a customer’s face against an image stored in a bank database and associated with the account number, the damage caused by stolen cards and PINs is effectively neutralized. Only when the PIN matches the account, and the live and stored images match, would a user be considered fully verified. A system can examine just the eyes; the eyes, nose, and mouth; or the ears, nose, mouth, and eyebrows; and so on.

In this paper, we will also look into an automatic teller machine security model that provides customers a cardless, password-free way to get their money out of an ATM: just step up to the camera while your eye is scanned. The iris, the colored part of the eye that the camera checks, is unique to every person, more so than fingerprints.


Because our ATM system would only attempt to match two (and later, a few) discrete images, searching through a large database of possible matching candidates would be unnecessary. The process would effectively become an exercise in pattern matching, which would not require a great deal of time. With appropriate lighting and robust learning software, slight variations could be accounted for in most cases. Further, a positive visual match would cause the live image to be stored in the database, so that future transactions would have a broader base to compare against if the original account image fails to provide a match, thereby decreasing false negatives.
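To make the pattern-matching claim concrete, here is a minimal Python sketch of verification as a two-image comparison. It is a deliberately simplified toy: a real system would use local feature analysis, whereas this uses a plain per-pixel difference over tiny grayscale pixel lists and a made-up threshold.

```python
def similarity(stored, live):
    """Score two equally sized grayscale pixel lists; 1.0 means identical."""
    diff = sum(abs(a - b) for a, b in zip(stored, live))
    return 1.0 - diff / (255 * len(stored))

def verify(stored, live, threshold=0.9):
    """Verification compares the live image against ONE stored account
    image, rather than searching a whole database of candidates."""
    return similarity(stored, live) >= threshold

account_image = [120, 130, 125, 118]            # toy 2x2 "account" image
live_image    = [121, 128, 126, 117]            # slight lighting variation
print(verify(account_image, live_image))        # → True
print(verify(account_image, [0, 255, 0, 255]))  # → False
```

The point is the shape of the computation: one fixed comparison per transaction, so the cost stays constant no matter how many customers the bank has.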

When a match is made with the PIN but not the images, the bank could limit transactions in a manner agreed upon by the customer when the account was opened, and could store the image of the user for later examination by bank officials. As for bank employees gaining access to customer PINs for use in fraudulent transactions, this system would likewise reduce that threat, limiting exposure to the low ceiling imposed by the bank and agreed to by the customer for visually unverifiable transactions.

In the case of credit card use at ATMs, such a verification system would not currently be feasible without creating an overhaul for the entire credit card issuing industry, but it is possible that positive results (read: significant fraud reduction) achieved by this system might motivate such an overhaul.

The last consideration is that consumers may be wary of the privacy concerns raised by maintaining images of customers in a bank database, encrypted or otherwise, due to possible hacking attempts or employee misuse. However, one could argue that having the image compromised by a third party would have far less dire consequences than the account information itself. Furthermore, since nearly all ATMs videotape customers engaging in transactions, it is no broad leap to realize that banks already build an archive of their customer images, even if they are not necessarily grouped with account information.

Hardware and software

ATMs contain secure cryptoprocessors, generally within an IBM PC-compatible host computer in a secure enclosure. The security of the machine relies mostly on the integrity of the secure cryptoprocessor: the host software often runs on a commodity operating system. In-store ATMs typically connect directly to their ATM transaction processor via a modem over a dedicated telephone line, although the move towards Internet connections is under way.

In addition, ATMs are moving away from custom circuit boards (most of which are based on the Intel 8086 architecture) and into full-fledged PCs with commodity operating systems such as Windows 2000 and Linux. An example of this is Banrisul, the largest bank in the south of Brazil, which has replaced the MS-DOS operating systems in its automatic teller machines with Linux. Other platforms include RMX 86, OS/2, and Windows 98 bundled with Java. The newest ATMs use Windows XP or Windows XP Embedded.


ATMs are generally reliable, but if they do go wrong, customers will be left without cash until the following morning or whenever they can get to the bank during opening hours. Of course, not all errors are to the detriment of customers; there have been cases of machines giving out money without debiting the account, or giving out higher-value notes as a result of an incorrect denomination of banknote being loaded in the money cassettes. Errors may be mechanical (card transport mechanisms, keypads, hard disk failures); software (operating system, device drivers, applications); communications; or purely down to operator error.


Early ATM security focused on making the ATMs invulnerable to physical attack; they were effectively safes with dispenser mechanisms. ATMs are placed not only near banks, but also in locations such as malls, grocery stores, and restaurants, and are a quick and convenient way to get cash. They are also public and visible, so it pays to be careful when you’re making transactions. Follow these general tips for your personal safety.

Stay alert.

If an ATM is housed in an enclosed area, shut the entry door completely behind you. If you drive up to an ATM, keep your car doors locked and an eye on your surroundings. If you feel uneasy or sense something may be wrong while you’re at an ATM, particularly at night or when you’re alone, leave the area.

Keep your PIN confidential.

Memorize your Personal Identification Number (PIN); don’t write it on your card or leave it in your wallet or purse. Keep your number to yourself. Never provide your PIN over the telephone, even if a caller identifies himself as a bank employee or police officer; neither would call you to obtain your number.

Conduct transactions in private.

Stay squarely in front of the ATM when completing your transaction so people waiting behind you won’t have an opportunity to see your PIN being entered or to view any account information. Similarly, fill out your deposit/withdrawal slips privately.

Don’t flash your cash.

If you must count your money, do it at the ATM, and place your cash into your wallet or purse before stepping away. Avoid making excessively large withdrawals. If you think you’re being followed as you leave the ATM, go to a public area near other people and, if necessary, ask for help.

Save your receipts.

Your ATM receipts provide a record of your transactions that you can later reconcile with your monthly bank statement. If you notice any discrepancies on your statement, contact your bank as soon as possible. Leaving receipts at an ATM can also let others know how much money you’ve withdrawn and how much you have in your account.

Guard your card.

Don’t lend your card or provide your PIN to others, or discuss your bank account with friendly strangers. If your card is lost or stolen, contact your bank immediately.

Immediately report any crime to the police.

Contact the Department Of Public Security or your local police station for more personal safety information.


The main issues faced in developing such a model are keeping the time elapsed in the verification process to a negligible amount; allowing for an appropriate level of variation in a customer’s face when compared to the database image; and the fact that credit cards usable at ATMs are generally issued by institutions that have no in-person contact with the customer, and hence no opportunity to acquire a photo.



For most of the past ten years, the majority of ATMs used worldwide ran under IBM’s now-defunct OS/2. However, IBM hasn’t issued a major update to the operating system in over six years. Movement in the banking world is now going in two directions: Windows and Linux. NCR, a leading worldwide ATM manufacturer, recently announced an agreement to use Windows XP Embedded in its next generation of personalized ATMs. Windows XP Embedded allows OEMs to pick and choose from the thousands of components that make up Windows XP Professional, including integrated multimedia, networking, and database management functionality. This makes the use of off-the-shelf facial recognition code more desirable, because it could easily be compiled for the Windows XP environment and the networking and database tools would already be in place.

Many financial institutions are relying on Windows NT because of its stability and maturity as a platform. The ATMs send database requests to bank servers, which do the bulk of the transaction processing. This model would also work well for the proposed system if the ATMs’ processors were not powerful enough to quickly perform the facial recognition algorithms.



There are hundreds of proposed and actual implementations of facial recognition technology, from all manner of vendors, for all manner of uses. However, for the model proposed in this paper, we are interested only in the process of facial verification (matching a live image to a predefined image to verify a claim of identity), not in the process of facial evaluation (matching a live image to any image in a database). Further, the environmental conditions under which the verification takes place, namely the lighting, the imaging system, the image profile, and the processing environment, would all be controlled within certain narrow limits, making hugely robust software unnecessary. One leading class of facial recognition algorithms is image-template based. This method attempts to capture global features of facial images in facial templates. What must be taken into account, though, are certain key factors that may change across live images: illumination, expression, and pose (profile).

The natural conclusion, then, is to take a frontal image for the bank database and to prompt the user, verbally or otherwise, to face the camera directly when the ATM verification process begins, so as to avoid the need to account for profile changes. With this and other accommodations, recognition rates for verification can rise above 90%.

The conclusion to be drawn for this project, then, is that facial verification software is currently up to the task of providing high match rates for use in ATM transactions. What remains is to find an appropriate open-source local feature analysis facial verification program that can be used on a variety of platforms, including embedded processors, and to determine behavior protocols for the match / non-match cases.

Our Methodology

The first and most important step of this project will be to locate a powerful open-source facial recognition program that uses local feature analysis and is targeted at facial verification. This program should be compilable on multiple systems, including Linux and Windows variants, and should be customizable to the extent of allowing for variations in the processing power of the machines onto which it would be deployed. We will then need to familiarize ourselves with the internal workings of the program so that we can learn its strengths and limitations. Simple testing of this program will also need to occur so that we can evaluate its effectiveness. Several sample images will be taken of several individuals to be used as test cases – one each for “account” images, and several each for “live” images, which will vary pose, lighting conditions, and expression.

Once a final program is chosen, we will develop a simple ATM black box program. This program will serve as the theoretical ATM with which the facial recognition software will interact. It will take in a name and password, and then look in a folder for an image associated with that name. It will then take in an image from a separate folder of “live” images and use the facial recognition program to generate a match level between the two. Finally, it will use the match level to decide whether or not to allow “access”, at which point it will terminate. All of this will be necessary, of course, because we will not have access to an actual ATM or its software.
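A minimal sketch of how such a black box might be structured, assuming one folder of “account” images and one of “live” images keyed by name; the `match_level` stub stands in for the real facial recognition call and the password table and threshold are purely hypothetical:

```python
import os

MATCH_THRESHOLD = 0.90  # hypothetical cutoff for granting access

def match_level(account_image_path, live_image_path):
    """Placeholder for the facial recognition program; a real system would
    compute a similarity score from the two images rather than comparing bytes."""
    with open(account_image_path, "rb") as a, open(live_image_path, "rb") as b:
        return 1.0 if a.read() == b.read() else 0.0

def atm_black_box(name, password, passwords, account_dir, live_dir):
    """Simulate one ATM session: check credentials, then verify the face."""
    if passwords.get(name) != password:
        return "access denied"
    account_image = os.path.join(account_dir, name + ".img")
    live_image = os.path.join(live_dir, name + ".img")
    if not (os.path.exists(account_image) and os.path.exists(live_image)):
        return "access denied"
    score = match_level(account_image, live_image)
    return "access granted" if score >= MATCH_THRESHOLD else "access denied"
```

The point of the stub is that the decision logic and folder layout can be tested long before the real recognition program is wired in.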

Both pieces of software will be compiled and run on a Windows XP and a Linux system. Once they are both functioning properly, they will be tweaked as much as possible to increase performance (decreasing the time spent matching) and to decrease memory footprint.

Following that, the black boxes will be broken into two components – a server and a client – to be used in a two-machine network. The client code will act as a user interface, passing all input data to the server code, which will handle the calls to the facial recognition software, further reducing the memory footprint and processor load required on the client end. In this sense, the thin client architecture of many ATMs will be emulated.
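The client/server split might look something like this toy sketch, in which the client merely forwards input over a socket and the server performs the (stubbed) verification; the protocol, the port handling, and the `handle_request` logic are invented for illustration:

```python
import socket
import threading

def handle_request(data):
    """Server-side stand-in for the call into the facial recognition
    software; a real server would compare images and return a match level."""
    name = data.decode()
    return b"granted" if name == "alice" else b"denied"

def run_server(port_holder, ready, n_requests=1):
    """Serve a fixed number of verification requests, then exit."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))           # let the OS pick a free port
    srv.listen(1)
    port_holder.append(srv.getsockname()[1])
    ready.set()                          # signal that the server is up
    for _ in range(n_requests):
        conn, _ = srv.accept()
        with conn:
            conn.sendall(handle_request(conn.recv(1024)))
    srv.close()

def thin_client(port, name):
    """The client only collects input and forwards it; all heavy
    processing happens on the server end."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(name.encode())
        return c.recv(1024).decode()
```

Because the client holds no recognition code or image database, its memory footprint and processor load stay small, which is the property the thin-client ATM model depends on.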

We will then investigate the process of using the black box program to control a USB camera attached to the computer to avoid the use of the folder of “live” images. Lastly, it may be possible to add some sort of DES encryption to the client end to encrypt the input data and decrypt the output data from the server – knowing that this will increase the processor load, but better allowing us to gauge the time it takes to process.


In spite of all these security features, a new technology has been developed. Bank United of Texas became the first bank in the United States to offer iris recognition technology at automatic teller machines, providing customers with a cardless, password-free way to get their money out of an ATM. There is no card to show and no fingers to ink; no customer inconvenience or discomfort. It is just a photograph of a Bank United customer’s eyes. Just step up to the camera while your eye is scanned. The iris (the colored part of the eye the camera will be checking) is unique to every person, more so than fingerprints. And, for customers who can’t remember their personal identification number and scratch it on the back of their cards or somewhere a potential thief can find it, there is no more fear of having an account cleaned out if the card is lost or stolen.


How the system works

When a customer puts in a bankcard, a stereo camera locates the face, finds the eye and takes a digital image of the iris at a distance of up to three feet. The resulting computerized “iris code” is compared with one the customer will initially provide the bank. The ATM won’t work if the two codes don’t match. The entire process takes less than two seconds.
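The article does not describe the comparison itself, but iris systems are commonly described as reducing the image to a binary “iris code” and comparing two codes by the fraction of bits on which they disagree. A hedged sketch, with a toy code length and a made-up acceptance threshold:

```python
def hamming_fraction(code_a, code_b):
    """Fraction of bit positions where the two iris codes disagree."""
    if len(code_a) != len(code_b):
        raise ValueError("iris codes must be the same length")
    diffs = sum(a != b for a, b in zip(code_a, code_b))
    return diffs / len(code_a)

def irises_match(stored, live, threshold=0.32):
    """Accept if the codes disagree in fewer than `threshold` of their bits.
    Real deployments tune this cutoff; 0.32 here is only illustrative."""
    return hamming_fraction(stored, live) < threshold

stored = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # toy 10-bit code; real codes are far longer
live   = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]   # one bit flipped by imaging noise
print(irises_match(stored, live))
```

A bitwise comparison like this is cheap, which is consistent with the claim that the whole process completes in under two seconds.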

The system works equally well with customers wearing glasses or contact lenses and at night. No special lighting is needed. The camera also does not use any kind of beam. Instead, a special lens has been developed that will not only blow up the image of the iris, but provide more detail when it does. Iris scans are much more accurate than other high-tech ID systems available that scan voices, faces and fingerprints.

Scientists have identified 250 features unique to each person’s iris – compared with about 40 for fingerprints – and the iris remains constant throughout a person’s life, unlike a voice or a face. Fingerprint and hand patterns can be changed through alteration or injury. The iris is the best part of the eye to use as an identifier because there are no known diseases of the iris, and eye surgery is not performed on it. Iris identification is the most secure, robust, and stable form of identification known. It is far safer, faster, more secure, and more accurate than DNA testing. Even identical twins do not have identical irises. The iris remains the same from 18 months after birth until five minutes after death.

When the system is fully operational, a bank customer will have an iris record made for comparison when an account is opened. The bank will have the option of identifying the left eye, the right eye, or both. The process requires no intervention by the customer; they will simply get a letter telling them they no longer have to use a PIN. And, scam artists beware: a picture of the cardholder won’t pass muster. The first thing the camera checks is whether the eye is pulsating; if no blood is flowing through the eye, it is either dead or a picture.




We thus develop an ATM model that provides more reliable security by using facial recognition software. By keeping the time elapsed in the verification process negligible, we also aim to preserve the efficiency of the ATM system. One could argue that having the image compromised by a third party would have far less dire consequences than the account information itself. Furthermore, since nearly all ATMs videotape customers engaging in transactions, it is no broad leap to realize that banks already build an archive of their customer images, even if they are not necessarily grouped with account information.

Puzzlebox Orbit: Brain-Controlled Helicopter

Puzzlebox Orbit is an educational toy that combines a brain-controlled helicopter with open hardware, software, and teaching material.

For the past two years Puzzlebox has been producing brain-controlled helicopters for classrooms and television. Now comes the chance to fly your own.

The Purpose

Join the experiment.

We are building and selling this crazy new toy. Then we show everyone how we made it. We will sell finished, working, brain-controlled helicopters but also release guides and software for taking them apart to rebuild or customize. We will publish lessons on how mind-controlled devices actually work and how infrared signals steer the aircraft. We are testing a hypothesis that this form of cooperation can succeed commercially while aiding the pursuit of science and education.

Our overall goal is to explore an Open approach to Brain-Computer Interface (BCI) technology. Advances at the cutting edge are waiting to find their way to the public and this project is our latest contribution. If our funding is successful all material including source code, hardware schematics, and documentation will be freely distributed.

Then we start the next experiment.

The Product

Puzzlebox Orbit features a unique spherical design that protects the helicopter blades from unintended impact with objects such as walls and ceilings, while lending a pleasantly technical aesthetic. Despite remote control helicopters in general having earned a reputation for being fragile, we have been extremely pleased with the build quality and resilience of our samples. They have survived several falls and collisions over the course of development and testing without noticeable damage.

We offer two models, the first designed to be used with mobile devices such as tablets and smartphones. A NeuroSky MindWave Mobile EEG headset is required to communicate with the device over Bluetooth. Our software then extracts and visualizes your brainwaves in realtime. Command signals are issued to the Puzzlebox Orbit via an infrared adapter connected to the audio port (for compatibility with Apple’s iOS).

Puzzlebox Pyramid (Prototype)

Puzzlebox Pyramid is supplied with our second, self-contained model. The Pyramid acts as a home base and remote control unit for the Orbit. It features a custom-designed, programmable micro-controller compatible with popular boards from Arduino. Twelve multi-colored LED lights are arranged according to clock positions on the face of the Pyramid and are used to indicate current levels of concentration, mental relaxation, and EEG signal quality. The lights can be customized to display different colors and patterns with distinct meanings according to preference. Lining the rim are several infrared LEDs that operate the helicopter and, with software programming, are capable of controlling additional IR toys and devices, including televisions.
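As a toy model of that indicator, a 0-100 mental-state reading could be mapped onto the twelve clock-position LEDs; the 0-100 scale and the linear mapping are assumptions for illustration only, not the Pyramid firmware itself:

```python
def leds_lit(level, num_leds=12):
    """Map a 0-100 mental-state reading to how many of the clock-position
    LEDs should be lit (none at level 0, all twelve at level 100)."""
    level = max(0, min(100, level))  # clamp out-of-range sensor readings
    return round(level * num_leds / 100)

print(leds_lit(0), leds_lit(50), leds_lit(100))
```

The same mapping could drive concentration, relaxation, or signal quality, simply by feeding it a different reading.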

With either edition the user can select a “flight path” for the helicopter (such as “hover in place” or “fly across the room”) to be carried out whenever a targeted personal mental state is detected and maintained. Third-party developers are able and encouraged to contribute new features and modes of flight control.
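One plausible reading of “detected and maintained” is that the flight path fires only after the attention reading has stayed above a threshold for a run of consecutive samples; the threshold, hold length, and sample values here are invented for the sketch:

```python
def flight_triggered(readings, threshold=70, hold_samples=5):
    """Return True once the attention reading has stayed at or above the
    threshold for `hold_samples` consecutive samples; a single dip resets."""
    streak = 0
    for value in readings:
        streak = streak + 1 if value >= threshold else 0
        if streak >= hold_samples:
            return True
    return False

# Attention dips below 70 midway, so the streak resets before finally holding.
samples = [65, 72, 75, 80, 68, 71, 74, 77, 82, 90]
print(flight_triggered(samples))
```

Requiring a sustained streak rather than a single high sample is one way to keep momentary noise in the EEG signal from launching the helicopter.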

Puzzlebox Orbit relies on EEG hardware from NeuroSky to produce measurements of attention and meditation. Leveraging their hardware plus our proven track record with BCI has yielded a much faster and smoother time to market, empowering us to focus on building the best possible product and software. We offer rewards to backers both with and without pre-packaged headsets included.

Puzzlebox Orbits and Pyramids

The Process

Our hardware engineer returned to China at the beginning of November to oversee our manufacturing process. We have confirmed a readily-available supply of Orbit helicopters, infrared dongles for our mobile edition, and NeuroSky headsets sufficient to fill all orders for our December reward tier. Delivery to backers is expected in time for the holidays along with feature-complete beta software made available for testing on a variety of handsets and devices.

For Puzzlebox Pyramid we will use either SLA 3D printing or injection molding, based on demand. We have contacted several manufacturers who can form the mold, including a major provider in China. Our custom circuit board for Pyramid is still under design, but will be finished this month. Hao (our hardware engineer) is a Chinese national, having maintained connections to PCB manufacturers in Southern China since his early career. We have a good relationship with a significant electronic components provider as well as top factories to produce and solder PCBs. Normally it is hard to find a factory willing to train workers to assemble products numbering only in the hundreds but we have solid ties with several consumer electronics manufacturers (including one factory renowned for producing brand-name housewares such as hair dryers) and they will assemble and package Pyramids for us, regardless of number of units. Local knowledge, language, and relationships play a key role here.

Finally we have arranged for receipt, repackaging, and domestic shipping of complete systems for the US and (soon) Canada. Dependent upon response, as a flex goal we would add additional reward tiers with Orbits, Pyramids, EEG headsets, and mobile device controllers to be shipped in subsequent months. International shipping will become available in (northern) Spring 2013.

The Result

If successful our project will publish all software, protocols, and available hardware schematics under Open Source (and/or Creative Commons) licenses. We are willing to risk sharing our intellectual property in this way because we believe it is the best way to grow our community and to increase knowledge in the field.

Finally, if fully funded we will produce videos and illustrated documentation explaining how the various neuroscience principles and technologies involved actually operate (including EEG and infrared transmitters). Because as cool as it might be to fly a helicopter with your brain, it’s cooler to understand how it all works.

Puzzlebox Brainstorms in the classroom

We envision the Puzzlebox Orbit being used for entertainment, personal training of mental focus or relaxation, and as an aid to teaching science and technology from middle school through to university level. Basic principles should be understandable by a motivated 10-year-old. Any interested high school or college student should be able to access and extend our software and designs.


Q: What’s the development status?

A: Our first prototype is complete. Our brain-control software communicates with the headset under Mac OS X, Linux, Windows and Android. We have built and tested several Puzzlebox Pyramid base stations with helicopter control. A fully operational Brain-Computer Interface release of our software for mobile devices will be ready by December.

Q: Which EEG headset do you use?

A: We have selected the NeuroSky MindWave for use with this product. NeuroSky has been an excellent partner on past endeavors and we really enjoy working with their team. For the Pyramid version of Puzzlebox Orbit we require the original MindWave, which includes a USB RF dongle to wirelessly communicate with the EEG headset. The mobile device version of Puzzlebox Orbit is designed for the MindWave Mobile, the Bluetooth equivalent. Because we use an official MindWave headset, existing owners do not need to purchase a new headset, and backers are free to explore NeuroSky’s online store for more brainwave-based games and applications.

Q: What files are you going to release under open licenses?

A: The source code of the Linux/Mac/PC software and mobile apps will be published as soon as the project is funded. PCB schematics and layouts, firmware, and the 3D model of the Pyramid will be sent to each backer when his/her reward is shipped. When all rewards have shipped, the final designs will be considered complete and released freely.

Q: What components does Puzzlebox produce directly?

A: The Puzzlebox Pyramid is a completely custom hardware module designed for controlling Puzzlebox Orbit and compatible future devices. Our hardware engineer returned to China in early November to oversee manufacturing (see above). For the Orbit helicopter and mobile device’s IR adapter we have researched and carefully selected multiple supply chains. The EEG headset is supplied by NeuroSky.

RISKS AND CHALLENGES

We’re anxious to explore and explain both the possibilities and limitations of EEG technology with consumer-grade hardware, and believe we have to be clear and honest when it comes to setting expectations. We are aware that misrepresentation could set back public perception of this industry for years.

When using the current Brain-Computer Interface, it will not be possible to steer the Puzzlebox Orbit in more than one direction at a time. With practice a user should be able to improve their ability to concentrate (or alternatively, relax). This affects the duration for which they can maintain flight as well as the response time at take-off. But the science simply does not support being able to distinguish between multiple “intentions” with this quantity or placement of electrodes.

A more specific technical challenge will be to find the “sweet spot” flight settings at which the helicopter can hover as still as possible under a variety of room sizes and conditions, or fly in a straight line for long distances. We will likely have to find reasonable compromises and offer customizable trim settings to the user.

By way of disclosure at this stage we have not yet begun software development for the iOS edition but can confirm that our IR hardware is compatible. Initial releases may arrive as source code only, until our finished application has been approved by Apple for distribution in the App Store. This should be considered by backers seeking a December reward package.

Henry Ford began production of the Model T automobile.

In 1908 Henry Ford began production of the Model T automobile. Based on his original Model A design first manufactured in 1903, the Model T took five years to develop. Its creation inaugurated what we know today as the mass production assembly line. This revolutionary idea was based on the concept of simply assembling interchangeable component parts. Prior to this time, coaches and buggies had been hand-built in small numbers by specialized craftspeople who rarely duplicated any particular unit. Ford’s innovative design reduced the number of parts needed as well as the number of skilled fitters who had always formed the bulk of the assembly operation, giving Ford a tremendous advantage over his competition.


Ford’s first venture into automobile assembly with the Model A involved setting up assembly stands on which the whole vehicle was built, usually by a single assembler who fit an entire section of the car together in one place. This person performed the same activity over and over at his stationary assembly stand. To provide for more efficiency, Ford had parts delivered as needed to each work station. In this way each assembly fitter took about 8.5 hours to complete his assembly task. By the time the Model T was being developed Ford had decided to use multiple assembly stands with assemblers moving from stand to stand, each performing a specific function. This process reduced the assembly time for each fitter from 8.5 hours to a mere 2.5 minutes by rendering each worker completely familiar with a specific task.

Ford soon recognized that walking from stand to stand wasted time and created jam-ups in the production process as faster workers overtook slower ones. In Detroit in 1913, he solved this problem by introducing the first moving assembly line, a conveyor that moved the vehicle past a stationary assembler. By eliminating the need for workers to move between stations, Ford cut the assembly task for each worker from 2.5 minutes to just under 2 minutes; the moving assembly conveyor could now pace the stationary worker. The first conveyor line consisted of metal strips to which the vehicle’s wheels were attached. The metal strips were attached to a belt that rolled the length of the factory and then, beneath the floor, returned to the beginning area. This reduction in the amount of human effort required to assemble an automobile caught the attention of automobile assemblers throughout the world. Ford’s mass production drove the automobile industry for nearly five decades and was eventually adopted by almost every other industrial manufacturer. Although technological advancements have enabled many improvements to modern day automobile assembly operations, the basic concept of stationary workers installing parts on a vehicle as it passes their work stations has not changed drastically over the years.

Raw Materials

Although the bulk of an automobile is virgin steel, petroleum-based products (plastics and vinyls) have come to represent an increasingly large percentage of automotive components. The light-weight materials derived from petroleum have helped to lighten some models by as much as thirty percent. As the price of fossil fuels continues to rise, the preference for lighter, more fuel efficient vehicles will become more pronounced.


Introducing a new model of automobile generally takes three to five years from inception to assembly. Ideas for new models are developed to respond to unmet public needs and preferences. Trying to predict what the public will want to drive in five years is no small feat, yet automobile companies have successfully designed automobiles that fit public tastes. With the help of computer-aided design equipment, designers develop basic concept drawings that help them visualize the proposed vehicle’s appearance. Based on this simulation, they then construct clay models that can be studied by styling experts familiar with what the public is likely to accept. Aerodynamic engineers also review the models, studying air-flow parameters and doing feasibility studies on crash tests. Only after all models have been reviewed and accepted are tool designers permitted to begin building the tools that will manufacture the component parts of the new model.

The Manufacturing Process


  • 1 The automobile assembly plant represents only the final phase in the process of manufacturing an automobile, for it is here that the components supplied by more than 4,000 outside suppliers, including company-owned parts suppliers, are brought together for assembly, usually by truck or railroad. Those parts that will be used in the chassis are delivered to one area, while those that will comprise the body are unloaded at another.


  • 2 The typical car or truck is constructed from the ground up (and out). The frame forms the base on which the body rests and from which all subsequent assembly components follow. The frame is placed on the assembly line and clamped to the conveyer to prevent shifting as it moves down the line. From here the automobile frame moves to component assembly areas where complete front and rear suspensions, gas tanks, rear axles and drive shafts, gear boxes, steering box components, wheel drums, and braking systems are sequentially installed.


    Workers install engines on Model Ts at a Ford Motor Company plant. The photo is from about 1917.


    The automobile, for decades the quintessential American industrial product, did not have its origins in the United States. In 1860, Etienne Lenoir, a Belgian mechanic, introduced an internal combustion engine that proved useful as a source of stationary power. In 1878, Nicholas Otto, a German manufacturer, developed his four-stroke “explosion” engine. By 1885, one of his engineers, Gottlieb Daimler, was building the first of four experimental vehicles powered by a modified Otto internal combustion engine. Also in 1885, another German manufacturer, Carl Benz, introduced a three-wheeled, self-propelled vehicle. In 1887, the Benz became the first automobile offered for sale to the public. By 1895, automotive technology was dominated by the French, led by Emile Lavassor. Lavassor developed the basic mechanical arrangement of the car, placing the engine in the front of the chassis, with the crankshaft perpendicular to the axles.

    In 1896, the Duryea Motor Wagon became the first production motor vehicle in the United States. In that same year, Henry Ford demonstrated his first experimental vehicle, the Quadricycle. By 1908, when the Ford Motor Company introduced the Model T, the United States had dozens of automobile manufacturers. The Model T quickly became the standard by which other cars were measured; ten years later, half of all cars on the road were Model Ts. It had a simple four-cylinder, twenty-horsepower engine and a planetary transmission giving two gears forward and one backward. It was sturdy, had high road clearance to negotiate the rutted roads of the day, and was easy to operate and maintain.

    William S. Pretzer

  • 3 An off-line operation at this stage of production mates the vehicle’s engine with its transmission. Workers use robotic arms to install these heavy components inside the engine compartment of the frame. After the engine and transmission are installed, a worker attaches the radiator, and another bolts it into place. Because of the nature of these heavy component parts, articulating robots perform all of the lift and carry operations while assemblers using pneumatic wrenches bolt component pieces in place. Careful ergonomic studies of every assembly task have provided assembly workers with the safest and most efficient tools available.

    On automobile assembly lines, much of the work is now done by robots rather than humans. In the first stages of automobile manufacture, robots weld the floor pan pieces together and assist workers in placing components such as the suspension onto the chassis.


  • 4 Generally, the floor pan is the largest body component to which a multitude of panels and braces will subsequently be either welded or bolted. As it moves down the assembly line, held in place by clamping fixtures, the shell of the vehicle is built. First, the left and right quarter panels are robotically disengaged from pre-staged shipping containers and placed onto the floor pan, where they are stabilized with positioning fixtures and welded.
  • 5 The front and rear door pillars, roof, and body side panels are assembled in the same fashion. The shell of the automobile assembled in this section of the process lends itself to the use of robots because articulating arms can easily introduce various component braces and panels to the floor pan and perform a high number of weld operations in a time frame and with a degree of accuracy no human workers could ever approach. Robots can pick and load 200-pound (90.8 kilogram) roof panels and place them precisely in the proper weld position with tolerance variations held to within .001 of an inch. Moreover, robots can also tolerate the smoke, weld flashes, and gases created during this phase of production.

    The body is built up on a separate assembly line from the chassis. Robots once again perform most of the welding on the various panels, but human workers are necessary to bolt the parts together. During welding, component pieces are held securely in a jig while welding operations are performed. Once the body shell is complete, it is attached to an overhead conveyor for the painting process. The multi-step painting process entails inspection, cleaning, undercoat (electrostatically applied) dipping, drying, topcoat spraying, and baking.

  • 6 As the body moves from the isolated weld area of the assembly line, subsequent body components including fully assembled doors, deck lids, hood panel, fenders, trunk lid, and bumper reinforcements are installed. Although robots help workers place these components onto the body shell, the workers provide the proper fit for most of the bolt-on functional parts using pneumatically assisted tools.


  • 7 Prior to painting, the body must pass through a rigorous inspection process, the “body in white” operation. The shell of the vehicle passes through a brightly lit white room where it is fully wiped down by visual inspectors using cloths soaked in hi-light oil. Under the lights, this oil allows inspectors to see any defects in the sheet metal body panels. Dings, dents, and any other defects are repaired right on the line by skilled body repairmen. After the shell has been fully inspected and repaired, the assembly conveyor carries it through a cleaning station where it is immersed and cleaned of all residual oil, dirt, and contaminants.
  • 8 As the shell exits the cleaning station it goes through a drying booth and then through an undercoat dip—an electrostatically charged bath of undercoat paint (called the E-coat) that covers every nook and cranny of the body shell, both inside and out, with primer. This coat acts as a substrate surface to which the top coat of colored paint adheres.
  • 9 After the E-coat bath, the shell is again dried in a booth as it proceeds on to the final paint operation. In most automobile assembly plants today, vehicle bodies are spray-painted by robots that have been programmed to apply the exact amounts of paint to just the right areas for just the right length of time. Considerable research and programming has gone into the dynamics of robotic painting in order to ensure the fine “wet” finishes we have come to expect. Our robotic painters have come a long way since Ford’s first Model Ts, which were painted by hand with a brush.
  • 10 Once the shell has been fully covered with a base coat of color paint and a clear top coat, the conveyor transfers the bodies through baking ovens where the paint is cured at temperatures exceeding 275 degrees Fahrenheit (135 degrees Celsius).
    The body and chassis assemblies are mated near the end of the production process. Robotic arms lift the body shell onto the chassis frame, where human workers then bolt the two together. After final components are installed, the vehicle is driven off the assembly line to a quality checkpoint.


    After the shell leaves the paint area it is ready for interior assembly.

Interior assembly

  • 11 The painted shell proceeds through the interior assembly area where workers assemble all of the instrumentation and wiring systems, dash panels, interior lights, seats, door and trim panels, headliners, radios, speakers, all glass except the automobile windshield, steering column and wheel, body weatherstrips, vinyl tops, brake and gas pedals, carpeting, and front and rear bumper fascias.
  • 12 Next, robots equipped with suction cups remove the windshield from a shipping container, apply a bead of urethane sealer to the perimeter of the glass, and then place it into the body windshield frame. Robots also pick seats and trim panels and transport them to the vehicle for the ease and efficiency of the assembly operator. After passing through this section the shell is given a water test to ensure the proper fit of door panels, glass, and weatherstripping. It is now ready to mate with the chassis.


  • 13 The chassis assembly conveyor and the body shell conveyor meet at this stage of production. As the chassis passes the body conveyor the shell is robotically lifted from its conveyor fixtures and placed onto the car frame. Assembly workers, some at ground level and some in work pits beneath the conveyor, bolt the car body to the frame. Once the mating takes place the automobile proceeds down the line to receive final trim components, battery, tires, anti-freeze, and gasoline.
  • 14 The vehicle can now be started. From here it is driven to a checkpoint off the line, where its engine is audited, its lights and horn checked, its tires balanced, and its charging system examined. Any defects discovered at this stage require that the car be taken to a central repair area, usually located near the end of the line. A crew of skilled trouble-shooters at this stage analyze and repair all problems. When the vehicle passes final audit it is given a price label and driven to a staging lot where it will await shipment to its destination.

Quality Control

All of the components that go into the automobile are produced at other sites. This means the thousands of component pieces that comprise the car must be manufactured, tested, packaged, and shipped to the assembly plants, often on the same day they will be used. This requires no small amount of planning. To accomplish it, most automobile manufacturers require outside parts vendors to subject their component parts to rigorous testing and inspection audits similar to those used by the assembly plants. In this way the assembly plants can anticipate that the products arriving at their receiving docks are Statistical Process Control (SPC) approved and free from defects.

Once the component parts of the automobile begin to be assembled at the automotive factory, production control specialists can follow the progress of each embryonic automobile by means of its Vehicle Identification Number (VIN), assigned at the start of the production line. In many of the more advanced assembly plants a small radio frequency transponder is attached to the chassis and floor pan. This sending unit carries the VIN information and monitors its progress along the assembly process. Knowing what operations the vehicle has been through, where it is going, and when it should arrive at the next assembly station gives production management personnel the ability to electronically control the manufacturing sequence. Throughout the assembly process quality audit stations keep track of vital information concerning the integrity of various functional components of the vehicle.
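The VIN that anchors all of this tracking carries its own built-in integrity check: position 9 of a North American 17-character VIN is a check digit computed from the other sixteen characters. A short Python sketch of the standard calculation, using the sample VIN published with the specification:

```python
# Validating a North American VIN's check digit (position 9), as a
# production-control system might do when scanning vehicles on the line.
# Letters are transliterated to numbers, each position gets a fixed weight,
# and the weighted sum mod 11 must equal the check digit (10 is written 'X').
TRANSLIT = dict(zip("ABCDEFGH", range(1, 9)))
TRANSLIT.update(zip("JKLMN", range(1, 6)))
TRANSLIT.update({"P": 7, "R": 9})
TRANSLIT.update(zip("STUVWXYZ", range(2, 10)))
TRANSLIT.update({str(d): d for d in range(10)})

WEIGHTS = [8, 7, 6, 5, 4, 3, 2, 10, 0, 9, 8, 7, 6, 5, 4, 3, 2]

def vin_check_digit(vin: str) -> str:
    """Return the expected check digit ('0'-'9' or 'X') for a 17-char VIN."""
    total = sum(TRANSLIT[ch] * w for ch, w in zip(vin.upper(), WEIGHTS))
    remainder = total % 11
    return "X" if remainder == 10 else str(remainder)

def vin_is_valid(vin: str) -> bool:
    return len(vin) == 17 and vin_check_digit(vin) == vin[8].upper()

print(vin_is_valid("1M8GDM9AXKP042788"))  # the standard's sample VIN: True
```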

This idea comes from a change in quality control ideology over the years. Formerly, quality control was seen as a final inspection process that sought to discover defects only after the vehicle was built. In contrast, today quality is seen as a process built right into the design of the vehicle as well as the assembly process. In this way assembly operators can stop the conveyor if workers find a defect. Corrections can then be made, or supplies checked to determine whether an entire batch of components is bad. Vehicle recalls are costly and manufacturers do everything possible to ensure the integrity of their product before it is shipped to the customer. After the vehicle is assembled a validation process is conducted at the end of the assembly line to verify quality audits from the various inspection points throughout the assembly process. This final audit tests for properly fitting panels; dynamics; squeaks and rattles; functioning electrical components; and engine, chassis, and wheel alignment. In many assembly plants vehicles are periodically pulled from the audit line and given full functional tests. All efforts today are put forth to ensure that quality and reliability are built into the assembled product.

The Future

The development of the electric automobile will owe more to innovative solar and aeronautical engineering and advanced satellite and radar technology than to traditional automotive design and construction. The electric car has no engine, exhaust system, transmission, muffler, radiator, or spark plugs. It will require neither tune-ups nor—truly revolutionary—gasoline. Instead, its power will come from alternating current (AC) electric motors with a brushless design capable of spinning up to 20,000 revolutions/minute. Batteries to power these motors will come from high performance cells capable of generating more than 100 kilowatts of power. And, unlike the lead-acid batteries of the past and present, future batteries will be environmentally safe and recyclable. Integral to the braking system of the vehicle will be a power inverter that, once the accelerator is let off, converts the motors' output back into direct current to recharge the battery pack, so the motors act as generators even as the car is driven long into the future.

The growth of automobile use and the increasing resistance to road building have made our highway systems both congested and obsolete. But new electronic vehicle technologies that permit cars to navigate around the congestion and even drive themselves may soon become possible. Turning over the operation of our automobiles to computers would mean they would gather information from the roadway about congestion and find the fastest route to their instructed destination, thus making better use of limited highway space. The advent of the electric car will come because of a rare convergence of circumstance and ability. Growing intolerance for pollution combined with extraordinary technological advancements will change the global transportation paradigm that will carry us into the twenty-first century.


RadioBlock: Simple Radio for Arduino or any Embedded System

The wireless modem you’ve been waiting for. Works with Arduino & other micros. Open source mesh networking base. FCC Certified. Cheap.

So who’s behind RadioBlocks? A group of engineers who have worked on many aspects of low-power radio devices. A group of engineers who time & time again saw customers coming to us with similar requests, but with no way for us to easily fill them. So we created RadioBlocks to allow people to easily drop a radio link into their project, hence “RadioBlocks” – A simple to use radio building block.

Sure, there are lots of radio boards out there. Most have two modes: super-simple serial-port replacement mode, and complex full network mode. Neither of those is quite right – most people just want to send some data between a few devices. They need more than serial-port replacement, but the full network mode is too much hassle. And many of those radio devices are just too expensive – are you really going to drop $30 or $40 on a single radio node, then buy extra hardware so you can attach sensors? Good luck with that!

International Shipping?

See update #2 for complete details! If you are located outside the US, you’ll need to add $13 to your reward for us to ship to you! For example, if you choose the $44 reward you’ll need to bump it to $57 for us to ship it to you!

Do the Monster Mesh

You’ll need to understand a few terms to see how this device works. First off – it’s using a Mesh network ‘behind the scenes’. Mesh just means you can dump a bunch of devices down, and they will figure out how to send data from one device to another.

Why do you care about Mesh? Well, it helps with one unfortunate truth of all radio devices: their range sucks. Oh sure – we get 300′ range outside, or 80′ inside, which seems like lots. But this all depends on what your walls are made of, what other devices are around, and a whole lot more. So sometimes you might get 100′ inside, but sometimes you might get 20′. Mesh means every device is a repeater as needed. Do you need to get a radio signal out of that room with reinforced concrete walls? Put a device just outside the room, and it will be used as a ‘repeater’ by the network. Of course that ‘repeater’ device is already configured to forward the data whether or not it “knows” the final destination, and in fact you can use that repeater as just another node (because it is just another node). A mesh network is perfect for applications like security systems or sensor networks, since you are likely to have a bunch of devices distributed around. While your front porch light sensor might not be able to talk to your backyard motion sensor directly, they can both talk to the kitchen motion sensor, which means they can still send messages to each other without you needing to do anything.
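The multi-hop idea in that last example is easy to sketch. This is not SimpleMesh's actual route-discovery algorithm – just a toy breadth-first search in Python showing how a message reaches a node that is out of direct radio range by hopping through a neighbour:

```python
from collections import deque

# Toy mesh: each node can only "hear" its direct neighbours. As in the
# text's example, porch and backyard can't hear each other, but both can
# hear the kitchen sensor, which relays for them.
links = {
    "porch":    {"kitchen"},
    "backyard": {"kitchen"},
    "kitchen":  {"porch", "backyard"},
}

def route(src, dst):
    """Breadth-first search for a hop-by-hop path from src to dst."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links[path[-1]] - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None  # unreachable

print(route("porch", "backyard"))  # ['porch', 'kitchen', 'backyard']
```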

RadioBlock with SimpleMesh

So what’s different? Well, the bones of our hardware are like most other products’: we have a radio chip, and we have a microcontroller. The microcontroller handles all the complex stuff for you, so you just send it simple, easy commands – tell it the address to send data to, and the network finds a path to get it there.

The first major difference is we don’t hide the software in our microcontroller. It’s completely open-source. Most competitors don’t give that away – you are stuck with the features they chose. You can modify the software to your heart’s content, and we even make it easy to do so (see ‘debugging’ later). You can even implement your entire project on the RadioBlock device.

Second, we throw away everything we don’t need. Most protocols have different addressing modes, because they want you to be able to support networks of millions of nodes. Yeah, sure. We only have one address per node – you want to turn your toaster on? Alright, just write down a 4-digit number. That is your toaster’s address. That’s it.

SimpleMesh doesn’t have a special ‘central’ node; you just send data to addresses. Sure one of those devices can be a computer that receives all the data, but there is nothing special about it according to SimpleMesh.

SimpleMesh is packed with more features – including security that doesn’t rely on sending an encryption key to your radio over the open SPI bus. So you can use SimpleMesh in real commercial products. You can use SimpleMesh on other hardware too, it’s not locked down to our product. But we’d really like you to buy our product, because then we can keep supporting SimpleMesh.

RadioBlock HW Features

First off – this product is FCC certified. This isn’t some fly-by-night product you can only experiment with. You can install 10,000 of these downtown tomorrow for your next great product.

The 4-pin header contains everything you need. Power + two serial lines. The on-board regulator means you can power the RadioBlock from 3V – 6V. The serial lines are 5V tolerant, making it easy to interface to whatever you want. You can plug this into a breadboard if you want. More on that later.

A programming header connects to the LPCLink boards. More on that later.

An expansion header provides I2C. Here you can connect some expansion boards to add in other things, such as accelerometers or extra I/O lines.

On-board LED can be used for blinking, always a hit at parties. Or, to give you a simple visual cue that things are working…

The antenna & Atmel radio chip provide the important RF functionality of the board. This gives it a range of about 100 meters; we have just begun range testing and will publish results soon.

If you want to run code on the RadioBlock, you’ll appreciate the LPC1114 microcontroller. It’s a 32-bit ARM Cortex M0 device with lots of handy peripherals. Best of all there are great low cost tools available and we’ve built a handy JTAG cable to make it easy to interface.

The board comes in two versions. One has a battery holder on the back – this version has female headers (so you won’t short anything out), and is designed so it can be deployed on its own.

RadioBlock & Arduino

You don’t need to plug this thing into a shield. You can just plug it into any four digital Arduino pins. This magic works because the device can be powered by setting the Arduino pins high/low. The current consumption is low enough that it doesn’t stress the AVR, and the device has a regulator on-board so it isn’t bothered by the less-than-stable voltage this provides.

The other two pins are the serial link. If you want, of course, you can just connect up power & the two serial lines in a more ‘classic’ way, which really means you only need two pins on any Arduino to control the radio. You can use it with both 3.3V and 5V Arduinos. The RadioBlock internally regulates the external power down to 3V, but the I/O lines can work with 5V logic.

The library leaves the hardware serial port on the Arduino free, so you can still use that to talk to your computer. You can even connect more than one RadioBlock to a single Arduino if, perhaps, you want to run several different networks from one device.

Arduino Library

We have a basic library now. The idea is to make it easy – just plug your RadioBlock in, and send data somewhere. We’re expanding the library to add more functions, but of course it’s all open-source so you can help too.

Arduino Shield

Really, you don’t need a shield. You can just connect up a few wires. But we’re making a shield anyway. Not because we’re greedy, mind you – but because we want to add more features. In particular we’ve designed a shield that adds an AAA battery pack, which can power the Arduino & RadioBlock. Critically, this shield lets the RadioBlock shut down power to the Arduino, then turn the system back on when it receives data, or even just after some delay. Powering an Arduino directly with AAA batteries would only last a few days – with this shield, the batteries could last a year, since the system can be off most of the time. We’ve got a design & prototype, but need to finish testing and get it into production. This board is not part of this offering or part of this Kickstarter project!

RadioBlock & Computers

Many people aren’t into Arduinos. That’s fine – you don’t need them to use this product. You can directly interface a node to a computer using a serial-to-USB converter. Our example Python app sends data in the proper format & parses the responses.
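To make the “proper format” idea concrete, here is a sketch of the kind of framing such a host-side app performs. The layout (start byte, length, command, payload, XOR checksum) and the constants are hypothetical illustrations, not the actual RadioBlock serial protocol – see the Serial Protocol document for the real format:

```python
# Hypothetical host-side framing for a serial-attached radio. The frame
# layout and constants below are invented for illustration; they are NOT
# the RadioBlock serial protocol.
START = 0xAB          # hypothetical frame delimiter
CMD_SEND = 0x01       # hypothetical "send data" command id

def build_frame(command: int, payload: bytes) -> bytes:
    """Wrap a command + payload in [START, length, body..., XOR checksum]."""
    body = bytes([command]) + payload
    checksum = 0
    for b in body:
        checksum ^= b
    return bytes([START, len(body)]) + body + bytes([checksum])

# Sending b"hi" to a hypothetical 16-bit address 0x1234:
frame = build_frame(CMD_SEND, (0x1234).to_bytes(2, "little") + b"hi")
print(frame.hex())
```

In real use the resulting bytes would be written to the serial-to-USB converter, e.g. with pyserial’s `Serial.write()`.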

You can use this sort of connection to add a RadioBlock to your Linux computer too – think about having a Raspberry Pi as one device on your network. This way you could easily connect your wireless network to the internet, and have it send you an e-mail when it detects something amiss. Even if you have a hacked router firmware (DD-WRT) you could interface to this, since most of those routers provide a few serial port lines.

RadioBlock & Other Devices

For most of our lives we haven’t used Arduinos; we’ve just used microcontroller development kits. The truth is that the RadioBlock can be connected to basically anything embedded. It works at 3V, 3.3V, and 5V (common embedded power supplies) and just needs a serial link. The simple pin-out means you might need a few wires at most – sometimes it will plug directly into other boards with existing connectors!

We’re working on example C code to talk to the RadioBlock. So you don’t really need to implement anything – you just drop down our interface code, point it to your serial port, and you’re done. Remember the software in the RadioBlock is open source, so you might even just write code on the RadioBlock itself.

Here it is plugged into an FPGA board, for example – the device happens to fit into Digilent Inc PMod connectors. AVRs, PICs, 8051s, ARMs…. “RadioBlock don’t care, it’ll plug into anything!” It’s a bit promiscuous like that.

RadioBlock Add-Ons

The RadioBlock has some expansion ports. We’ve already designed a 3-axis accelerometer board & an IO expansion board to fit in here. The expansion ports will let you run code entirely on the RadioBlock without needing an external microcontroller.

The following shows the 3-axis accelerometer module on the left (yes it’s that small), and the I/O expansion module on the right. The I/O expansion includes 6 LEDs and a push-button along with 8 GPIO lines. Note only the ‘battery’ based boards include the female header that fits these. (But, based on your demand we can supply the headers…)

Debugging and Firmware Upgrades

The microprocessor on this board is an LPC1114. For $30 you can buy an ‘LPCXpresso’ kit which includes a USB JTAG & a compiler license. With that you can plug the RadioBlock into the JTAG and do full-blown debugging on the board itself:

The LPC1114 even has a hardware bootloader. That means you cannot brick the device – you can always upload a new image to the RadioBlock board itself. So get one now, and you’ll always be able to get new features we add to this same hardware.

RadioBlock & IEEE 802.15.4

The radio on the RadioBlock is a real Atmel IEEE 802.15.4 device. This means a few things: for one you can run all sorts of other software on the RadioBlock (you’ll want the $30 debugger for this) such as Contiki or TinyOS. The IEEE 802.15.4 standard also means you can use any IEEE 802.15.4 sniffer if you want, or you will be able to use our device as an IEEE 802.15.4 sniffer. 

Or you could use the RadioBlock as a way to add an IEEE 802.15.4 device to an embedded Linux computer – think of having a 6LoWPAN router (again, the Raspberry Pi comes to mind here). For now you’ll need to do some extra coding, because we’re focusing on the SimpleMesh software. But the hardware would fully support your endeavours.

Oh, and it can directly plug into the Raspberry Pi with the current revision, though a future revision may require a wire if the 5V pin changes location.


We love documentation. Too many products have poor documentation. So we’re writing as much as we can. To give you an idea, here are some links to beta versions. We’ve got more coming though, so hold onto your seat.


SimpleMesh Users Guide

Serial Protocol


Prototype – Done

As can be seen, we’ve got a prototype design done. We’ve built several of them, and have been testing them extensively.

FCC Testing/Approval – Done

FCC approval is completely done. If we get enough Canadian support we’d consider doing IC (Industry Canada) certification as well. Right now we can’t sell the device into the Canadian market; we just need to pay for the IC approval. Our FCC testing was actually done by a Canadian lab, so getting IC approval will be trivial if required.

Mesh Networking Software – Done

The mesh networking software (SimpleMesh) is totally done. We’ve been expanding it and adding features, but since it’s entirely open source it will always be improving.

Go here to get the source! git://

More Info here!

Python Library – In Progress

We have the Python app written now. We’re improving it and making the whole thing more compartmentalized, so you can just do something like ‘import radioblock’ and send commands.

If you haven’t used Python, it’s very quick to pick up. It runs on most computers, so is one of the best choices for cross-platform applications.

Arduino Library – In Progress

As mentioned, this is already in progress and basic functionality works.

C/C++ Library – In Progress

Most embedded work gets done in C, so we are going to provide a complete implementation of the RadioBlock interface. This gives you a simple API: the provided code deals with creating the proper serial message format, which you then send out over your serial port.

Documentation – In Progress

You can see from the previous links that we already have some pretty good documentation. We really want to make sure you aren’t left scratching your head, though, so we are planning a lot more. We are especially targeting tutorials built around these boards.

Production Run – Not Done

The final step is getting a production run of devices. We’ve got some manufacturers lined up, but need the capital. If we can get a large first run it will help drive cost way down, making these as affordable as possible.

Galago: Electronics Prototyping Board to Make Things Better

Revolutionary Arduino alternative with an ARM chip, incredible features, great open source tools, tiny footprint and built-in debugger.

As Seen on Hack a Day:

“…the Galago might just be the perfect ARM board for tinkerers weaning themselves off the Arduino.”

Tiny Revolution.

Galago fits a powerful 32-bit ARM chip, an on-board debugger and other incredible features in a tiny format to instantly improve your electronic projects. It’s open hardware and you develop software for it with open, cross-platform and easy-to-use tools. Everything about Galago is optimized to help you make things better.

What is Galago?

Galago is a tiny revolution in rapid electronics prototyping. It combines a powerful ARM Cortex-M3 microcontroller with a hardware debugger on a tiny circuit board, allowing hobbyists and professionals alike to turn project ideas into reality faster and better than other microcontroller platforms. Galago’s debugger is the difference between starting a project … and finishing it.

How can you use it?

Plug Galago into a standard breadboard for quick prototyping, pop it into an app board to build an application or integrate it directly into your commercial product. Galago is inexpensive enough that you can leave it built into a project, and it is the first prototyping platform specifically designed to be cost-competitive with custom PCB engineering for small production runs, at under $10 in 1000-unit quantities. This means you can prototype a product with Galago, put it on Kickstarter and afford to produce the first batch using the same hardware you prototyped with. Incredible!

How Galago compares

Perhaps the best feature of Arduino, and the reason people choose it over other prototyping boards, is the consistent, easy-to-use development software. There are faster, smaller, better and less expensive prototyping boards everywhere, but none have the complete start-to-finish usability of Arduino. That is, until Galago came along. Because Galago is built on the principle of putting development experience first (instead of just selling products), you can expect a friendlier, more usable board than any other, including Arduino. Moreover, Outbreak is committed to your experience and we’ll continue to improve the development tools, libraries and community integration features as we develop new app boards.

Developing with Galago

Galago plugs into a standard solderless breadboard or an app board and connects to your computer with USB. The USB connection permits downloading and debugging firmware, plus it will power Galago if it’s not connected to another power source.

Write C, C++ or Wiring code with Galago’s simple but powerful development environment and deploy it to the hardware with a single click. Galago’s community features make sharing code and working on it with others extremely simple.

Use the integrated debugger to pause code execution, inspect variables and continue running – this helps you quickly find software problems as they arise to fix them faster with less head-scratching.

App boards

App boards get you to your goal faster. Simply connect Galago to a suitable app board to cut time off your development schedule or accelerate your weekend project.

Because Outbreak’s app boards are Open Hardware, they can be extended, adapted and remixed to suit any project, commercial, educational or artistic.

Galago can detect the app board it’s connected to so that the correct libraries can be downloaded by the development environment. Like Arduino® shields, app boards could be stacked, but because it’s very difficult to ensure mechanical and electrical compatibility between stacked boards, we encourage fusing multiple designs into a new app board instead. This approach also reduces cost and physical size.

With the right skills, app boards are so easy to make that you could design one for each project that you’d like to build more than, say, five copies of. Publishing this design on our site means that others can use your board as a basis for their projects, which benefits everyone!

What makes Galago different

Vision. Both in the sense that the debugger offers you vision into how your software is running on the device and in the overall vision of the project. Galago is the only platform expressly designed to be both extremely easy to prototype on and economically viable to build straight into medium-run commercial products. By allowing you to accelerate to shipping real products faster, you can try more adventurous ideas with lower risk than ever before. Galago also enables smaller teams, less-experienced designers and lower budgets. All of these advantages help you make things better.

App Board Rewards

Several contribution levels reward you with your choice of app board. When the campaign completes, you’ll be asked which one you’d like. If you have a great idea for an app board not on the list, suggest it – if there’s enough demand we’ll design it and make it available during the campaign too! Following are the app boards we’ve committed to build:

  • Ethernet: This is the one shown in the video. Features a single 10Mbit Ethernet port, a micro-SD slot and an on-board power supply with barrel jack for plugged-in power.
  • Audio: Features an efficient class-D audio amplifier intended to drive an external 8-ohm speaker and a micro-SD slot. Can be powered by batteries or external supply.
  • LEDs: Designed to drive 16 channels of PWM-controlled LEDs or similar using the TLC5940 chip. Can be daisy-chained to more ‘5940s to power large-format video displays.
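As a concrete illustration of what daisy-chaining the LED board means: each TLC5940 expects 16 channels × 12 bits of grayscale data (24 bytes per chip) shifted in MSB-first, so driving a chain is a matter of concatenating packed frames. A rough Python sketch of the packing – treat the channel ordering here as an assumption to check against the datasheet, not as driver code:

```python
# Packing PWM grayscale data for daisy-chained TLC5940 LED drivers.
# Each chip takes 16 channels x 12 bits = 192 bits (24 bytes), shifted in
# MSB-first; the last channel's value is packed first here, since data
# shifted in first ends up deepest in the shift register (assumption to
# verify against the datasheet).
def pack_grayscale(channels):
    """channels: list of 12-bit values (0-4095), length a multiple of 16."""
    assert len(channels) % 16 == 0
    bits = "".join(format(v, "012b") for v in reversed(channels))
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

frame = pack_grayscale([4095] * 15 + [0])  # one chip: ch15 off, rest full
print(len(frame))  # 24 bytes per chip
```

Two chained chips would simply take a 32-element list, producing 48 bytes to shift out in one go.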

The Starter Kit is a reward unique to our Kickstarter campaign that consists of the following:

  • One solderless breadboard
  • One audio amplifier break-out board with headphone jack
  • A serial-in, parallel-out shift register (74HC595)
  • Light sensor
  • Temperature sensor
  • Two buttons
  • A dozen jumper wires
  • 8 LEDs, 10 resistors and 10 capacitors

Where we are

Design Galago board … DONE

Develop Galago debugger technology … DONE

Build Galago prototypes … DONE

Exhaustively test Galago prototypes over the span of two months … DONE

Complete development environment … Almost!

Manufacture first large batch of Galagos and app boards … This is where we need your help!

Kickstarter Campaign

Prototypes of Galago, some with over 18 months of testing, demonstrate the ruggedness and reliability of the platform. Galago has been subjected to mechanical torture testing, continuous power supply short circuits and thermal stress. We’re very confident in the product and we think you’ll be very happy with it too. Now we’re getting ready to manufacture a large batch, and that’s where this campaign comes in.

Electronics are expensive to manufacture unless you build a lot at once. Galago is on Kickstarter so we can raise funds to manufacture a large quantity of Galagos and app boards and get them to you at a great price. We have quotes from both domestic and offshore circuit board manufacturers and assemblers who are just waiting for us to place our orders. This is where you come in – your contributions fund the start of the next great Open Hardware electronics prototyping platform. We hope you’ll join us in this vision!

Shipping for all reward levels in the United States is free! For international orders, please add USD $7 to any reward level.

Technical specifications

Board features

  • 72 MHz 32-bit ARM CPU with 32KB of flash ROM and 8KB of RAM
  • Integrated hardware debugger
  • One high-speed SPI port, up to 36 Mbps
  • One high-speed I2C port, up to 1.5 Mbps
  • One UART/USART with hardware flow-control capability, up to 256 kbps
  • 10 high-speed PWM pins, 6 driven by 32-bit (high-resolution) timers
  • 6 ADC (analog) input pins with 10-bit resolution at over 400 KSa/sec
  • 25 GPIO (digital) input/output pins

Libraries and Software

Because Galago uses a popular ARM chip, lots of existing code and libraries can be brought to the platform with ease. Galago’s debugger provides an advantage here too because new code is always easier to get working when you have this level of insight.

Multiplo: Create Your Own Robot

A system to design and build things in an easy way. Specially Robots. And it’s Open Source.

The Need

Our project started with a need. We were educators who lacked flexible teaching materials for robotics, and we were not satisfied with what was available on the market. It was clear that we needed to find a solution.

We started prototyping robots by laser-cutting acrylic and wiring breadboards to it. Soon we were giving lessons and designing parts that could be moved from one place to another to match the functions needed from the robot being built.

The Solution: a different kind of Kit

Experience in the classroom led us to keep things simple. The system has been used at primary schools, in science experiments and even at industry level. So from those prototypes we were laser-cutting in our garage, we got something with big potential.

The concept is that you get a box that has a kit inside. We took care of all the technical details in order for each single part to be compatible. Everything you need to build a robot is inside of the box. But you can also add your own parts…

MECHANICAL PARTS: The result of what we have put together is a set of mechanical parts that are easy to assemble, difficult to break and simple to customize. The system is based on a mathematical relationship between the dimensions of the components, making them match each other. Below you can see a picture and a video of some of the parts that make up a Multiplo Robot Building Kit.

ROBOT BRAINS: We decided to design and manufacture our own controller. Why? Because we wanted something user friendly, yet powerful and hackable. So here is the DuinoBot: an all-in-one robot controller with motor outputs, easy-to-use sensor connectors, a plastic case and many more features. It’s also expandable and 100% compatible with Arduino.

SOFTWARE: We know that it’s not easy to take the first steps into programming. For beginners, the controllers ship with a simple set of recorded actions that can be commanded by a common TV remote control (please read the FAQ for details).

If you are interested in learning programming, you might want to try our Graphical Software. It allows users with no previous experience in programming to start in no time. That software is also free and Open Source, and it has been funded through a KickStarter campaign as well. If you already know programming, you can use the Arduino IDE to program more sophisticated functions.

EVERYTHING WORKING: “Right out of the Box” are the words we like to use about how you should start using your Multiplo Kit. There is no need to program, study a wiring diagram or buy tools. No soldering or protoboard needed. You can build a simple robot in about 45 minutes. Still, you will be able to re-program its controller and add Arduino-compatible shields like WiFi, GPS or your tailor-made parts.

TRULY OPEN SOURCE MECHANICS: Like the rest of the system, the mechanical part designs are licensed free and Open Source.

COMPATIBLE AND EXPANDABLE: Our system allows you to just plug in industry-standard sensors. At the same time, our own sensors are also compatible with nearly any 3rd party robot controller. We especially encourage users to get either an original Arduino or any compatible board. There are several interesting options and we already support some of them. You can also get original Arduino(R) TinkerKit sensors, or third-party compatible sensors. We will be adding support for more devices.

FULLY DOCUMENTED: We have prepared pictorial assembly guides, video tutorials and a full set of examples. We will be posting them during the KickStarter campaign. You can find some of the assembly guides currently being used at schools on our website.

What are the Funds for?

We are now a team that includes a teacher, two engineers and a specialist in robotics. We are very proud of sticking to our principles. We will keep our word of licensing 100% of our work free and Open Source.

Our running costs as a small startup are high. We want to lower the price of our kits in order to flood the world with cheap and meaningful robots. But for that, we need to scale production.

Our goal is to make this technology available to more people, and to get other developers on board – people who also think that the world should be a better place and share things. Even if we don’t make it with this KickStarter campaign, we are releasing all the source files for free.

Our Dream

We developed this system in order to prototype robots, but we ended up inventing a platform for building things. We have been using it at public schools, where it has proved suitable as teaching material. It is currently being used for STEM education (Science, Technology, Engineering and Mathematics) with young students.

Not long ago, making a robot was a challenge that only a few had access to. We want to reach a critical mass that allows us to break boundaries. We have found the Arduino philosophy inspiring, and we think it’s time for that concept to spread to other areas.

We want our fully Open Source Robot Kit to get into the hands of educators, artists and all kinds of creative people. It is our dream to bring robotics close to people. We think it’s time for a change. Robotics for Everybody!