Amazon doesn’t eat its own DNS dogfood

Amazon.com relies on domain name system (DNS) services from competitors rather than Route 53, the DNS service sold by its own Amazon Web Services unit, according to a DNS lookup service.


For tech companies, using your own products and services is called “eating your own dog food” (or, as some prefer, “drinking your own champagne”). Amazon does not do that, at least not for its DNS.

The issue was recently raised on Twitter and was discussed on AWS forums more than a year ago. An AWS spokesperson declined to comment publicly on the issue.

According to a lookup on Kloth.net, a website that provides DNS queries, Amazon.com is hosted on Dyn and UltraDNS, two name-brand DNS services. Route 53 is Amazon Web Services’ own DNS service, which is frequently used to route incoming traffic to websites hosted on AWS.
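Anyone can reproduce that kind of check. Below is a minimal sketch that assumes the third-party dnspython package is installed (pip install dnspython); it is an illustration, not the tool Kloth.net actually uses. A command-line equivalent is dig NS amazon.com +short.

# Minimal sketch: list the authoritative name servers for a domain.
# Assumes the third-party "dnspython" package; this is an illustration,
# not the lookup tool Kloth.net actually uses.
import dns.resolver

def name_servers(domain: str) -> list[str]:
    answer = dns.resolver.resolve(domain, "NS")  # fetch the NS records
    return sorted(str(record.target) for record in answer)

if __name__ == "__main__":
    for ns in name_servers("amazon.com"):
        print(ns)
    # At the time of the article, these pointed at Dyn and UltraDNS hosts
    # rather than at the awsdns-* servers that Route 53 uses.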

Last year, users in an AWS forum questioned why Amazon.com does not use Route 53. An AWS employee initially said he could not discuss the details of internal network configurations within AWS. When the questioner asked whether Route 53 is a viable platform and what deficiencies or missing features had kept Amazon from using it, an AWS employee provided a slightly more detailed response.

“This is a totally fair question and concern,” the AWS employee Ben@AWS wrote. “We believe Route 53 compares well against other leading DNS providers in terms of scalability, responsiveness, and fault tolerance.” At the time, he said Amazon was migrating DNS zones to Route 53 and that some Amazon services already used it, including Elastic Beanstalk and Alexa.com (an Amazon company). He added that there are customers with DNS loads comparable to Amazon.com’s that use Route 53, but he did not name them.

There could be a legitimate reason for Amazon not to use Route 53, though. One Twitter user came to AWS’s defense: “Always a good idea to separate your DNS from your infrastructure,” user Tim Nash wrote, noting that if Route 53 had an outage, it could bring down AWS and Amazon.com, potentially preventing the e-commerce site from working and preventing AWS from alerting customers to the downtime. So perhaps spreading DNS workloads across multiple providers is a good idea. But Kloth.net does not show Amazon using Route 53 at all.

Shawn Campbell, a DNS expert and systems administrator for Canadian tech reseller Scalar Decisions, said he was surprised to learn that Amazon.com doesn’t use Route 53. He said UltraDNS is a leader in the DNS market, and he described Route 53 as a competing, up-and-coming platform compared to more established offerings. He said Route 53 is typically a good option for customers who have many other services hosted in AWS, so he questions how much Amazon.com uses AWS overall.

On the AWS Case Studies page there is only one mention of Amazon.com using AWS, which is the example of how Amazon.com migrated the tape backup of its Oracle databases to AWS Simple Storage Service (Amazon S3). While that’s not an exhaustive list, it is the only public example AWS cites of Amazon using its own cloud service.

By Brandon Butler


Tubes: A Journey to the Center of the Internet


Okay, I admit that I’m a geek and have read numerous books on the history of IT and the Internet. Katie Hafner’s Where Wizards Stay Up Late: The Origins of the Internet is a particular favorite of mine.

Along these lines, I just finished a book called Tubes: A Journey to the Center of the Internet, by Andrew Blum, a Wired magazine correspondent. Now, Tubes does provide a bit of Internet history around the Arpanet project, BBN, the Interface Message Processor (IMP), and the original Internet node at UCLA, but it takes the story in a different direction. Tubes goes on to look at the physical stuff like routers, cables, buildings, spinning disk drives, etc. – where they are, how they got there, who built them, and who manages them.

I can certainly relate to this book. Way back during the Internet boom, I worked at a fly-by-night telecom startup named GiantLoop Network, where I gained a bit of knowledge about Internet pipes. Yup, I toured 111 8th Ave. in NYC (a massive telecom hotel now owned by Google) and Brooklyn’s MetroTech Center. My company also had relationships with a cast of Internet characters like AboveNet, ConEd Communications, Enron, Global Crossing, and Metromedia Fiber Network (MFN).

In this role, I got to talk the Internet talk for a while back in 2000, but Tubes helped fill in the blanks about all the stuff I didn’t know or hadn’t kept up with. For others who haven’t touched the bowels of the Internet, Tubes acts as a tour guide on major pieces of Internet infrastructure with commentary on how these pieces co-exist.

I won’t give away details, but here are a few tidbits I learned (or re-learned, I’m getting old) from this book:

Blum does a good job of describing how massive Internet connectivity came together during the boom of the 1990s. Remember Metropolitan Area Exchange (MAE) East, MAE-West, and the Palo Alto Internet Exchange (PAIX)? The book provides a good description of their development (In recounting the story of PAIX, Blum refers to Digital Equipment Corporation (DEC) as: “one of Silicon Valley’s oldest and most venerable computer companies.” Yes, PAIX was a valley-based institution, but as Blum has probably heard dozens of times since his book’s publication, Digital belonged to Ken Olsen and his fellow New England Yankees in places like Maynard, MA).
Blum set off to visit some of the biggest Internet exchanges, specifically in Frankfurt and Amsterdam. In this chapter, he does a good job of describing not only the technology of each exchange but also how it fits into its city’s geography, history, and culture. Later in the book, Blum takes the reader on a similar stroll through the telecom hotels in lower Manhattan, dragging the reader through subway conduits, telecommunications history, wire pulling in the streets of Manhattan, and into the fiber optic/Internet present. Finally, Blum follows the path of undersea fiber to places like Porthcurno (Cornwall, England), Lisbon, and the U.S. Atlantic coast. He also describes the people and processes involved in picking these routes, deploying the fiber, and then connecting it to continental networks at the shore.
The book concludes with visits to massive data centers in areas like The Dalles (Google) and Prineville (Facebook) Oregon. In this chapter, Blum also meets with Microsoft executives and digs into how and why certain data center locations are chosen. Blum goes from tour guide to editorial contributor here, describing his Orwellian experience with the PR/legal-centric Google data center folks and a contrasting episode with surprisingly transparent Facebook personnel.

No, this isn’t a textbook with deep technical descriptions. Rather, it reads like a picaresque novel of one man’s journey for knowledge. Kind of an amalgamation of Homer’s Odyssey and a BGP routing table. Blum keeps asking questions, recounting history, and uncovering facts. As he gains knowledge, he brings the reader along for the ride.

I play in a rock ’n’ roll cover band here in Massachusetts with a few of my buddies from town. A few years ago, I learned that some of the ancient monitors we include in our sound system were actually used at Woodstock. I have no idea if this is true, but it’s a great story and it gave me an emotional connection to the history of rock. My guess is that through his journey and the publication of this book, Blum established a similar bond with the Internet infrastructure. That sense of joy and empathy comes shining through in Tubes, making it a fun read for geeks like me who never run out of questions to ask.

By Jon Oltsik

Company to demonstrate ‘Active Shooter Detection System’ in Massachusetts school


DARPA-inspired technology that promises to detect gunshots in a school, alert authorities and help first responders locate the shooter will be demonstrated this afternoon for civilian officials and members of law enforcement gathered in Methuen, Mass.

That the name of the school where this will happen, reportedly the first in the nation to be so equipped, isn’t being made public says a lot about the plague this technology is designed to address.

From a story on Boston.com:

Mayor Stephen Zanni, Schools Superintendent Judith Scannell, Police Chief Joseph Solomon and Congresswoman Nikki Tsongas are among those who were expected to be on hand, along with police chiefs and police officers from across the northeast.  The demonstration will simulate an active shooter in a school building and show how police would respond using the new technology.

The ‘‘Guardian Active Shooter Detection System’’ is triggered by the sounds of gunfire, sending an alert to police within seconds. Then, using smoke alarm-sized sensors installed throughout the school’s classrooms and hallways, it can transmit audio recordings in real time, so that emergency responders can track the shooter and monitor other developments before, during and after the person enters the building.

The company touting this technology, Shooter Detection Systems of Rowley, Mass., claims it produces “close to zero false alerts.” How close to zero that proves to be will likely be important.
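The published details stop at that high-level description: an acoustic trigger, an alert to police within seconds, then live audio from the sensors for responders. Purely as a hypothetical illustration of that flow, and not anything based on Shooter Detection Systems’ actual design, a sketch might look like the following, with every name, field, and threshold invented for the example:

# Hypothetical sketch of the flow described above: a sensor detects a
# gunshot-like sound, an alert goes out within seconds, and responders can
# then pull live audio from that sensor. All names, fields, and thresholds
# are invented for illustration; none come from the actual product.
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class GunshotEvent:
    sensor_id: str    # which smoke-alarm-sized sensor fired
    location: str     # classroom or hallway label
    timestamp: float  # when the acoustic signature was detected

def detect_gunshot(sensor_id: str, location: str, confidence: float,
                   threshold: float = 0.99) -> Optional[GunshotEvent]:
    # Raise an event only when the acoustic classifier is very confident,
    # reflecting the vendor's "close to zero false alerts" claim.
    if confidence >= threshold:
        return GunshotEvent(sensor_id, location, time.time())
    return None

def dispatch_alert(event: GunshotEvent) -> None:
    # A real system would notify police dispatch over a dedicated channel
    # and begin streaming audio; here we just print the alert.
    print(f"ALERT: possible gunfire at {event.location} "
          f"(sensor {event.sensor_id}, t={event.timestamp:.0f})")

event = detect_gunshot("hall-2b", "second-floor hallway", confidence=0.997)
if event:
    dispatch_alert(event)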

The company has a six-minute marketing video that is predictably alarmist.

Will such a system actually help?

Methuen Police Chief Solomon seems convinced, having earlier told CNN: “What we always find is that seconds count … I want to go right to the target, because if I can stop or mitigate the target, I can stop the carnage.”

Call me skeptical. “Seconds count” sounds an awful lot like “if it saves just one life,” which gets used too often to defend public-safety and zero-tolerance practices that are more about appearing to do something than actually doing something.

However, it wasn’t that long ago that I would have dismissed technology like this out of hand, as I did the ever-more-common school lockdown procedures. Not anymore.

By Paul McNamara

Intel doubles capacity of its data center SSD


Intel today announced upgrades to its Solid-State Drive DC S3500 Series of products that now offer up to 1.6TB of capacity, double what the previous generation had.

Intel also announced it has boosted the capacity of its M.2 form factor flash expansion card so that it can be used as a mass storage device and not simply a client boot drive.

The new S3500 M.2 expansion card comes in 80GB, 120GB and 340GB models.

“We do have customers asking for higher capacity on drives and we were able to accommodate it,” said David Ackerson, an Intel data center product line manager.

Intel’s S3500 2.5-in. form factor SSD now comes in capacities of up to 1.6TB. (Image: Intel)

Intel added to the M.2 card the same features that it had previously only offered in larger form factor SSDs, such as hardware-based AES 256-bit encryption and power loss protection.

“In addition to [acting as a boot drive], we expect M.2 will appeal to traditional server manufacturers that plan to offer smaller form factor servers. The S3500 M.2 provides data center performance in a small, sleek form factor to meet the needs of boot and traditional server applications,” Ackerson said.

The M.2 card could also be used as mass storage for digital signage, ATMs and other types of customer-facing devices such as digital slot machines, Ackerson added.

The M.2 flash card has a sequential read/write performance of up to 500MBps and 460MBps, respectively, and a random performance of 67,000 read I/Os per second (IOPS) and 8,300 write IOPS.
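Those random-I/O figures can be put into rough throughput terms. The sketch below assumes 4KB transfers, the block size typically used for random IOPS specifications (the article does not state it), so treat the numbers as back-of-the-envelope only:

# Rough conversion of random IOPS to throughput, assuming 4KB transfers
# (the usual block size for random-I/O specs; not stated in the article).
BLOCK_KB = 4

def iops_to_mb_per_sec(iops: int, block_kb: int = BLOCK_KB) -> float:
    return iops * block_kb / 1024

print(f"Random read:  {iops_to_mb_per_sec(67_000):.0f} MB/s")  # ~262 MB/s
print(f"Random write: {iops_to_mb_per_sec(8_300):.0f} MB/s")   # ~32 MB/s
# Compare with the sequential figures of up to 500 MB/s read and 460 MB/s write.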

“Basically, you’re getting all the performance of the 2.5-in. drive in a new form factor,” Ackerson said. The S3500 SSD also comes in a 1.8-in. form factor.

The new 2.5-in. S3500 SSD models have a top performance of 75,000 read IOPS and 18,500 write IOPS.

Up to 19% of Intel’s M.2 flash card is overprovisioned to increase write speeds; up to 10% of the new 2.5-in. SSD is used for the same purpose. The flash drives also have from 256MB to 1GB of DRAM depending on their overall flash capacity.
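To illustrate what that overprovisioning means in practice, here is a small worked example. The percentages are the ones quoted above; the 100GB raw-capacity figure is made up purely to keep the arithmetic obvious:

# Overprovisioning reserves part of the raw NAND for the controller
# (to absorb writes and speed up garbage collection), hiding it from the
# user. Percentages are from the article; the raw capacity is a made-up
# round number for illustration.
def usable_gb(raw_gb: float, overprovision_pct: float) -> float:
    return raw_gb * (1 - overprovision_pct / 100)

print(usable_gb(100, 19))  # M.2 card: up to 19% reserved -> 81.0 GB usable
print(usable_gb(100, 10))  # 2.5-in. drive: up to 10% reserved -> 90.0 GB usable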

Intel’s latest S3500 2.5-in form factor SSD comes in two new capacities, 1.2TB and 1.6TB. Previously, the drive was available with up to 800GB of capacity.

The S3500 series SSDs can sustain up to three full drive writes per day, or 880TB of writes over a drive’s lifetime. They have a 2-million-hour mean time between failures rating, according to Intel. Both the 2.5-in. and the M.2 S3500 SSDs come with five-year warranties.

Intel also upgraded its NAND flash controller with additional I/O paths to address the higher density products.

Intel has set its recommended customer pricing for the 2.5-in. S3500 SSD at $1,099 for the 1.2TB version and $1,444 for the 1.6TB drive. The M.2 card will sell for $99 for 80GB, $124 for 120GB and $314 for 340GB.

This story, “Intel doubles capacity of its data center SSD” was originally published by Computerworld.

By Lucas Mearian

Microsoft’s Bing predicted midterm election with 95% accuracy


We think of Microsoft’s Bing as a search engine, but there is a lot more to it than that. For example, I’ve found its translation service to be very good, more accurate than Google’s, especially with Asian languages. The translations aren’t perfect but they give me a better idea of what is being said than Google Translate.

Well, Bing has another hidden gem: Bing Predicts. Powered by an analytics engine Microsoft isn’t about to disclose, it has been used to predict NFL games and was nearly flawless in its predictions for the World Cup this past summer.

Now that the dust has settled from the elections, Bing Predicts has won out again with a 95% accuracy rate in calling the House, Senate, and governors’ races. It got 34 out of 35 Senate races correct, 419 out of 435 House seats correct, and 33 out of 36 governors’ races correct. That’s a better prediction rate than even Nate Silver’s lauded FiveThirtyEight blog managed.
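Those per-race counts are consistent with the headline figure. Treating the three race types as one pool (an assumption about how the roughly 95% was computed; Microsoft didn’t publish its method):

# Checking the reported accuracy against the per-race counts. Pooling all
# three race types is an assumption about how the headline figure was
# computed; Microsoft did not publish its method.
correct = 34 + 419 + 33   # Senate + House + governors' calls it got right
total   = 35 + 435 + 36
print(f"{correct}/{total} = {correct / total:.1%}")  # 486/506, roughly 96%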

If you are one of the few with a Windows Phone running Cortana, the digital assistant is powered by Bing Predicts, so you can ask Cortana questions and she might have an answer for you.

Now, here’s a real challenge. It predicts “Duck Dynasty” teen star Sadie Robertson will win this season’s “Dancing With The Stars,” which would go completely against all the momentum for former “Fresh Prince of Bel-Air” star Alfonso Ribeiro, the favorite to win. It also predicts the Indianapolis Colts have a 67% chance of beating my beloved New England Patriots this Sunday, to which I say phooey.

By Andy Patrizio

WireLurker malware threatens to destroy a key Apple advantage


Deserved or not, Apple’s Macintosh and iOS operating systems have long enjoyed a reputation as being largely immune to the kind of virus and other malware problems that have plagued Windows—and to a lesser extent Android—over the years.

Looked at objectively, that reputation has some basis in fact, especially on the tightly controlled iOS side, and also benefits from Apple being a far less lucrative target for criminals than Windows. With iOS’s worldwide popularity and Macintosh’s rising market share, however, the security pressure on Apple has never been higher.

So while the new WireLurker malware does not yet appear to have attacked Apple users outside of China, its very existence could threaten that extremely valuable reputation. Apart from any actual damage WireLurker or other malware might do to Apple systems, the more immediate danger is that significant numbers of Apple users might lose confidence in the relative security of their devices.

That’s already starting to happen, as media outlets sound the alarm and try to put the threat in perspective. Competitors and their supporters, meanwhile, are only too happy to try to pop Apple’s veneer of security.

That’s why it’s so essential that Apple come up with a credible, proactive response to WireLurker before it dents confidence among users of non-jailbroken phones who haven’t visited the compromised app store in China.

WireLurker is far from the first threat to Apple security, of course (see Apple’s iWorm fix still leaves major hole). But so far the threats haven’t been significant enough to change perceptions or behavior.

If that changes and Apple were to lose the perception of increased security, it wouldn’t kill the company. After all, ongoing security issues didn’t kill Windows or Android. But it would remove a key competitive advantage that helps burnish the Apple brand and allows it to be successful even when competitors offer similar features first or at lower prices.

Will WireLurker change behavior?

I’ve long worried that the world is waiting for the first widespread mobile security breach. I honestly don’t think that WireLurker will turn out to be that incident. But that’s not really the question.

The issue is whether WireLurker will turn out be the moment when Apple users no longer feel invulnerable to malware and start seriously worrying about the kind of anti-virus and other anti-malware countermeasures that users of other platforms take for granted.

For example, every corporate Windows PC I’ve ever used had anti-virus and other security software installed. Macs? Not so much. Even the conservative Fortune 500 companies I’ve worked for don’t routinely equip Macs, much less iPhones and iPads, with anti-malware solutions. And I’m pretty confident that’s the case for most people reading this as well. Having to add that hassle, expense, and performance overhead to Macs—and to iOS—would be a real drag.

I’m hoping it’s not necessary just because of WireLurker. But I’m resigned to the likelihood that no matter what Apple does now, something will make it happen sooner rather than later. At that point, all we’ll be able to say about Apple’s long, charmed run on the security front is, “it was nice while it lasted.”

By Fredric Paul

Pi, translated: The evolution of Raspberry Pi

 


A brief history of Pi

The Raspberry Pi has been the object of a great deal of nerdy affection since its initial release in 2012. A mousetrap-sized, self-contained single-board computer, the Pi is designed to serve as both an educational tool and a handy option for hobbyists – who have turned it into, well, pretty much anything you can think of. Here’s a look back through the brief but illustrious history of the Pi.

Read More…

How the FCC “THINKS” it can justify regulating U.S. internet


Throwing his full weight behind net neutrality, President Obama released a statement yesterday supporting the regulation of an open internet. The President’s statement didn’t have the same impact as Last Week Tonight host John Oliver’s net neutrality rant, which ultimately broke the Federal Communications Commission’s website, but the President was heard, and his statement will bring the net neutrality discussion back to regulating an open internet.

See also: Obama’s net neutrality proclamation won’t help solve the problem

Comparing worldwide internet speeds with those in the U.S. and South Korea, home to a government-regulated internet, bolsters the President’s argument. Beginning in 1981, advanced telecommunications became a pillar in the Korean government’s educational and economic plans. Charged with modernizing telecommunications, the Korean Telecommunications Authority replaced the slow-moving South Korean Post and Telecom Ministry’s bureaucracy. South Korea made the information superhighway the core of an urgent economic restructuring, turning smoke-stack industries into an information technology economy that would compete with the rest of Asia. The results of South Korea’s choices of policy and competition are clear.


Understanding the background of net neutrality, which is often described as tearfully boring, isn’t the exclusive domain of policy analysts and regulators. Here’s the short form version, explaining the tall poles holding up the tent of net neutrality.

When Congress passed the Telecommunications Act of 1996, it deregulated telecommunications and created a virtuous cycle of innovation. But the act failed to provide much choice for residential internet access. Susan Crawford, a visiting professor at Harvard Law School, said in a recent interview with NPR:

“for at least 77% of the country, your only choice for a high-capacity, high-speed Internet connection is your local cable monopoly.”

In 2010, when ISPs started to exercise their choke hold by demanding payments to create fast lanes for content providers like Netflix, the FCC issued its Open Internet Order, which created net neutrality rules prohibiting Internet service providers from blocking content and from prioritizing certain kinds of traffic.

Verizon challenged the order and won on appeal before the United States Court of Appeals for the District of Columbia Circuit in January of this year. The court didn’t stop at striking down the FCC’s Open Internet Order; it also clarified how the FCC could regulate an open internet under two provisions of the act.

Now, this gets a little wonky. The court interpreted Section 706 of the act to give the FCC the authority to “encourage the deployment on a reasonable and timely basis of advanced telecommunications capability to all Americans…Contrary to Verizon’s arguments, we believe the Commission has reasonably interpreted section 706(b) to empower it to take steps to accelerate broadband deployment if and when it determines that such deployment is not ‘reasonable and timely.'”

This can be compared to telephone universal service, which promoted telephone service for all Americans. However, the decline in American broadband leadership doesn’t inspire confidence in the FCC’s ability to encourage unregulated ISPs to build out broadband comparable to what can be found elsewhere around the globe.

The President recommends an alternative to the weak authority of Section 706: regulating the internet under Title II of the act. It’s analogous to the way electric utilities are regulated. Electricity, an essential service in everyone’s lives, is delivered through utilities that hold monopoly positions; those utilities are regulated because consumers have no alternative if a utility raises prices unreasonably. Electric utilities are managed to produce a fixed return on investment. If a utility wants to raise prices to cover the increased cost of improved services, the utility’s plan and ROI consistency must win regulatory approval.

Verizon and other internet access providers have a monopoly in 77% of the U.S., according to Crawford. Therefore, the FCC could choose to regulate internet access providers. The justification is that, without price competition or regulation, the internet access providers can increase prices without investing in improved service. The cost of internet access to consumers in the U.S. proves that the act failed to spark a virtuous cycle of internet innovation.

The Open Technology Institute’s October policy paper reports that internet access costs American consumers 25% more than their European counterparts pay for equivalent services. It also points out what could be possible. Japan’s KDDI delivers 1 Gbps for just $30 per month, and Google delivers the same capacity in selected U.S. markets for $70 per month, a price that was considered shockingly low in the U.S. when Google introduced it.

The President can’t order the FCC to act, though. The commission gets its funding and oversight from Congress, and any new open internet rules must be voted on by its five commissioners, currently three Democrats and two Republicans. But the decline in the country’s internet ranking and the high cost to consumers are urgent issues that warrant the President bringing regulation to the forefront of the discussion.

By Steven Max Patterson

 

Commentary by Jarrett Neil Ridlinghafer follows:

Just because South Korea, which is NOT AMERICA (thank God), has the fastest Internet… THAT’S this guy’s argument for justifying a federal government takeover of the Internet? Let’s look at just how GREAT the Feds are at managing ANYTHING:

1. The US Postal Service: $100 million a month in the red, with no sign of it EVER making money or even breaking even.

2. The Congressional Bank: closed after so many personal checks bounced by Congressmen and congresswomen that it went bankrupt.

3. Fannie Mae: billions in debt and bailed out, yet still in debt.

4. The Senate (controlled by the Democrat Party): NO BUDGET SUBMITTED (much less passed) in four years.

5. The supposed housing crisis and supposed “bank bailout crisis,” both staged by the Feds… as if they really fooled US citizens while they all lined their own pockets and repaid political debts with our hard-earned money.

6. The VA, where just last month a retired Marine blew his brains out in front of his local VA hospital because they refused to give him the painkillers he needed to stay sane.

These are the people we want controlling the Internet, which, with nanotechnology, robotics, cloud, and smart-home technology, will have access to EVERYTHING AND EVERYONE?

 

FORGET IT!!! KEEP FREE ENTERPRISE AND BUSINESSES IN CHARGE AND LET THE FCC REGULATE MONOPOLISTIC BEHAVIOR PROPERLY! Rework the monopoly laws with common-sense rules, and then maybe we wouldn’t even be having this argument today!

U.S. sets sights on 300 Petaflop supercomputer

WASHINGTON — U.S. officials Friday announced plans to spend $325 million on two new supercomputers, one of which may eventually be built to support 300 petaflops, faster than any supercomputer running today.

The U.S. Department of Energy, the major funder of supercomputers used for scientific research, wants to have the two systems – each with a base speed of 150 petaflops – possibly running by 2017. Going beyond the base speed to reach 300 petaflops will take additional government approvals.

If the world stands still, the U.S. may conceivably regain the lead in supercomputing speed from China with these new systems. But how adequate this planned investment will look three years from now is a question.

The DOE also announced another $100 million in “extreme” supercomputing research spending.

The funding was announced at a press conference at the U.S. Capitol attended by lawmakers from both parties. But the lawmakers weren’t reading from the same script as U.S. Energy Secretary Ernest Moniz when it came to assessing the U.S.’s place in the supercomputing world.

Moniz said the awards for the two systems, which will be built at the DOE’s Oak Ridge and Lawrence Livermore National Laboratories, “will ensure the United States retains global leadership in supercomputing.”

But Rep. Chuck Fleischmann (R-Tenn.) put U.S. leadership in the past tense. “Supercomputing is one of those things that we can step up and lead the world again,” he said. The Oak Ridge lab is located in his state.

And Rep. Dan Lipinski (D-Ill.), whose state is home to the Argonne National Laboratory, said the U.S. lead “is being challenged by other countries,” and pointed out that the U.S. has dropped from having 291 supercomputers in the Top500 list to 233.

“Our technology lead is not assured,” said Rep. Bill Foster (D-Ill.), who lamented the movement of computer chip manufacturing to overseas locales.

In an interview, Foster said he believes there is good bipartisan support for supercomputing research, but the research may face a problem if GOP budget proposals in the House slash science funding by double-digit percentages.

It’s “going to be very hard to defend supercomputing budgets if you’re facing that sort of cut across all of science,” Foster said.

The U.S. leads the world in supercomputing in terms of the dominance of its vendors, research capability and, as Lipinski pointed out, in the overall number of systems in the top 500, but not in speed.

China has the top-ranked system, the Tianhe-2, at about 34 petaflops, and Japan and Europe have major investments underway in this area. (A petaflop is 1,000 teraflops, or 1 quadrillion floating-point operations per second. An exascale system is 1,000 petaflops.)
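Putting those quoted speeds into common units (the figures below are the ones in the article; the comparison is just arithmetic):

# Putting the quoted speeds side by side.
# 1 petaflop = 1,000 teraflops = 1e15 floating-point operations per second;
# an exaflop is 1,000 petaflops.
TERA, PETA, EXA = 1e12, 1e15, 1e18

tianhe_2   = 34 * PETA    # China's top-ranked Tianhe-2
base_speed = 150 * PETA   # base speed of each planned DOE system
max_speed  = 300 * PETA   # possible upgraded speed of one of them

print(f"{max_speed / tianhe_2:.1f}x Tianhe-2")  # ~8.8x
print(f"{max_speed / EXA:.2f} exaflops")        # 0.30 -- still short of exascale
print(f"{base_speed / TERA:,.0f} teraflops")    # 150,000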

The new system to be built at Oak Ridge, to be known as Summit, is expected to deliver several times the performance of the lab’s current flagship system, Titan, despite using the same amount of power.

The new system to be built at Lawrence Livermore National Laboratory in California will be known as Sierra.

These systems will use IBM Power CPUs and Nvidia’s Volta GPU, a chip still in development.

Bill Dally, chief scientist at Nvidia, said in an interview that the GPUs will provide 90% of the compute capability on the new DOE machines. The improvement in power efficiency came from eliminating overhead, including logic operations not directly involved in computation. Nvidia also looked at data movement and focused on architectures that improve efficiency, such as colocating processes and minimizing the distance data has to move.

Dally said chip efficiency will have to improve by a factor of 10 to get to exascale performance levels, but he believes that’s possible with this architecture. “We have enough things on our target list,” he said, referring to possible changes in the chip design.

The DOE announcement was made on the eve of next week’s supercomputing conference in New Orleans.

Moniz said supercomputing leadership is about not only the speed of the computer, but also how one matches and integrates that with the algorithms and software. And in that area, the U.S. has the deepest experience, he said, adding “we will sustain that leadership.”

By Patrick Thibodeau

This story, “U.S. sets sights on 300 petaflop supercomputer” was originally published by Computerworld.

Google’s super-secret process for finding potential employees


Early on in "The Matrix," Neo wakes up sitting in his desk chair to see a prompt on his PC monitor – "follow the white rabbit" – that ultimately leads him to the man he’d wanted to work with. Judging by a series of discussions on Hacker News, Google may be employing similar tactics.

Some programmers have reported receiving a prompt on their screen while browsing information on Python programming that invites them to Google Foobar, where they can solve difficult coding problems. No one can log into the site unless they’ve logged in before, suggesting that it’s an invite-only page. Here’s how a Hacker News user described his invitation:

I was Googling some Python topic when my search results page suddenly split in the middle with some text saying something to the effect of "You speak our language, would you like to take a test?", linking to http://www.google.com/foobar/ .

I followed it and was led to a pseudo-shell, where I then found some coding problems. I can return to the page to continue working on them.

The discussion on Hacker News quickly turned to Google’s ambitions with the project. Many speculate that it’s an automated way to crowdsource potential employees through its search engine. Those who browse enough advanced information relating to the kind of programming Google is looking for might be a good fit, so why not devise a tool that reaches out to them? The coding tests can simply weed out those who might not be skilled enough, and could potentially uncover a "Good Will Hunting" kind of genius just waiting to solve a math problem on a chalkboard.

Of course, some were skeptical and even annoyed at a Google tactic that appears to rely on large-scale monitoring of search results. But one Hacker News user who hinted at being a Google employee suggested that everybody relax:

Disclaimer: my opinions are my own and not representing those of my employer or co-workers. I have no direct relationship to this project and haven’t looked it up internally.

Has it occurred to any of you that we might do these things for sheer fun, because doing that is not only allowed but celebrated?

The Daily Dot has already covered this and the discussion is spreading to Reddit, so Google searches for Python information will probably spike in the next few days. Sure, you could always just find Google’s job listings the old-fashioned way, but wouldn’t it be more fun to see if your search habits make you seem smart enough for a job offer?