On A Mission – By Marc Andreessen

I enjoyed Marc’s recent article so much I thought I would share it with my blog readers as well…So I hope you enjoy 🙂


On a Mission



One of the interesting things I have seen, especially in the last 10 years, is that many of the big winners in technology have been what I call “mission-driven” versus “mercenary-driven” companies.

There are a lot of companies that cut corners. There are a lot of companies that have a mercenary outlook, and will dump their idealistic goal to make a business work in the short-term. We steer clear of those. We are looking for the companies who are going to be the big winners because they are going to cause a fundamental change in the world, as opposed to making a short-term grab for revenue or a short-term grab for an acquisition.

These are the founders who come into the firm and say, “Look, I don’t care whether I make money or not, that’s not my goal. I want to change the world in the following way. I have this mission…” As Steve Jobs used to say, “I want to make a ding in the universe. I want to make beautiful products that people love.” Or Mark Zuckerberg: “I want to make the world more open and connected.” Or Larry Page and Sergey Brin: “I want to index the world’s information.”

How they will make money is typically not part of the conversation. These companies, and Google is a great example, usually have no business model. There is this vague notion of generating revenue. So you always wonder with your investor hat on, “Am I funding a social mission or am I funding a company? What’s going on here? Will they make compromises so fatal in the direction of pure ideology that they won’t actually ever build a business?”

But the pattern at the moment is the stronger the ideology or mission of the company, the more successful the company. I think a lot of that has to do with recruiting. A lot of the best people in the field don’t want to just work for money.

Let’s say you are founder of a company, and you are competing with 1,000 other founders to hire the smartest people coming out of the best universities. If you go in with a pitch that says, “You are going to make $120,000 a year, come work for us,” that is not as effective as, “You are going to change the world, and oh, by the way, you are going to make $120,000 a year.” So mission-driven companies seem to have a gigantic leg-up in recruiting, and that ripples through to morale and ultimately to retention.

Conversely, the purely mercenary startups we see generally don’t go well. They aren’t able to get good people, and they don’t end up having a message that can punch through the noise. They don’t tend to go anywhere.

The Machiavellian view on this is that if you are the founder, you actually want to pretend you have a huge ideological mission even if you don’t. And I guess you would rather do that than not have one, but clearly it helps enormously to have a real mission.

By Jarrett Neil Ridlinghafer
Chief Technology Analyst & Consultant
Compass Solutions, LLC
Cloud Consulting International

SaaS Global Trends for 2014 – An Analysis


Cloud Market Overview for 2014

2014 is set to be a stellar year for cloud computing in general and SaaS in particular. An estimated 70% of SaaS customers are small to medium businesses, while 80% of enterprises remain concerned with the risk issues of both security and compliance. 2013 has been hailed as the “Year of the Internet Breaches,” with everyone from Amazon to eBay, Facebook, Forbes, the IRS and Target being hacked and literally hundreds of millions of customer records being compromised. It’s no wonder Steve Wozniak, in his now-famous 2012 statement on the public cloud, saw “…horrible problems…” ahead, due primarily to the blatant marketing lies which claim the cloud is “safer than your own private enterprise infrastructure” — an actual claim made on numerous websites and in interviews by executives at Amazon, Google and other major public cloud companies. In my opinion (just as Steve said), those claims have definitely harmed the whole cloud market from a trust perspective. As executives become more educated about the realities of cloud computing, and the public cloud in particular, they realize those kinds of statements are far from the truth, and it has left a bad taste in many executives’ mouths, which I’ve experienced first-hand. What it has done is make the enterprise executives who see these blatant lies for what they are much more cautious, which in the long run is a good thing: I believe they should take their time and not jump into the public cloud without careful consideration, a lot of planning, and a complete and unbiased third-party risk assessment.

So, although cloud sales are steadily increasing, with total global cloud spending expected to reach $75B-$100B in 2014 (depending on which analyst you speak to), I believe the frequent and highly publicized breaches have hurt the market. Look for a slow-down in the SaaS market overall, as companies take stock and address the security and compliance concerns before they stick their big toe into the swamp that the cloud market has become, to “test the waters” so to speak.

SaaS Service Management

According to new research by Enterprise Management Associates (EMA), most IT organizations have limited visibility into the usage and cost of public SaaS applications. I believe in 2014 you will see more organizations seeking solutions to this problem, and for that reason I see the “SaaS management” market as one to keep an eye on in 2014. Both new startups and existing “SaaS management” businesses should see an increase in sales over the next few years, and you should also see a market consolidation, with the best players acquiring the lesser product/service vendors or putting them out of business.

Security & Compliance


Security & Compliance will be the real winner in 2014. Again, with 2013 being named the “Year of the Security Breaches” and hundreds of millions of private records compromised at mainstream websites, look for SaaS enterprise growth to slow down even more while start-ups race to secure market share of this massive new “Public Cloud Security & Compliance” emerging market.

Currently an estimated 80% of enterprises fear entering the public SaaS market (according to multiple surveys), and for good reason, as the risks have been shown to be astronomical indeed. As new software, hardware and services begin to appear to address these concerns, such as the new CSPComply service by Compass Solutions, LLC of Washington DC and others, we should begin to see an uptick in SaaS adoption. However, I would not hold my breath for that to occur in 2014; look for this increase to be more pronounced in 2016 and beyond.

Medical-Device Technology


Another emerging market which is expected to be massive indeed is the so-called medical-device market. Products such as the FitBit Flex wrist band and FitBit Aria wireless scale seem to motivate people to reduce weight and exercise more, as studies are beginning to show us. Currently there are socks, ankle bracelets and even sneakers which all work together as a unit with embedded sensor technology to show your heat dispersion pattern on your feet, how many miles you’ve walked, your average rate of speed, how many calories you’ve burned, your heart-rate and other data including your travel patterns.

Many of these devices utilize a SaaS infrastructure which, along with additional factors, will lead to more “service”-oriented SaaS business models developing from 2014 through the next few years, with the medical-device market being a major influence.

“Hyperconverged Network Paradigm”


As technologies such as 802.11ac, the next wave in wireless, begin to take hold, first-wave 80MHz products will deliver throughput from 433 megabits per second on the low end to a maximum of 1.3 gigabits per second at the physical layer, using denser modulation schemes of up to 256 quadrature amplitude modulation (QAM), compared to 802.11n’s 64 QAM, for a 33 percent improvement. The new protocol also doubles multiple input, multiple output (MIMO) capabilities, moving from 802.11n’s four spatial streams to eight streams. For users, this means a speed boost, greater up-link reliability and opportunities for improved down-link reliability as well. For the internet, it means massively increased traffic from vastly more connected devices, the very problem the IPv6 protocol was developed to address in the first place. Look for IPv6 to also be a big winner in 2014 and beyond, as more and more ISPs, backbone providers and telcos begin migrating over from IPv4 due to the massive increase in connected devices and bandwidth demands.
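The 33 percent figure follows directly from the bits each symbol carries under the two modulation schemes; a quick sanity-check of the arithmetic:

```python
import math

# Bits per symbol for a square QAM constellation = log2(constellation size)
bits_11n = math.log2(64)    # 802.11n: 64-QAM  -> 6 bits/symbol
bits_11ac = math.log2(256)  # 802.11ac: 256-QAM -> 8 bits/symbol

improvement = bits_11ac / bits_11n - 1
print(f"Per-symbol improvement: {improvement:.0%}")  # -> 33%
```

Note this is the per-symbol gain only; real-world throughput also depends on channel width, spatial streams, and signal conditions.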

A growing number of manufacturers are already shipping first-wave 802.11ac products for consumers and plan to expand offerings for business and enterprise network environments in the coming year. Speed and efficiency will ramp up even higher when second-wave 802.11ac devices arrive. They’ll offer additional improvements in channel bonding, handling channels up to 160MHz wide, along with support for four spatial streams. These capabilities will help second-wave devices achieve throughput of around 3.47Gbps. (Source: Cisco)
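The quoted Wave 2 figure can be checked by scaling the Wave 1 per-stream rate, assuming (as a rough sketch) that PHY throughput scales linearly with channel width and stream count:

```python
# Wave 1 baseline: one spatial stream in an 80 MHz channel (802.11ac)
per_stream_80mhz = 433.3  # Mbps, approximate PHY rate

# Wave 2: a 160 MHz channel (2x the width) and four spatial streams
wave2_rate_mbps = per_stream_80mhz * 2 * 4
print(f"Wave 2 estimate: {wave2_rate_mbps / 1000:.2f} Gbps")  # -> 3.47 Gbps
```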

Worldwide smartphone shipments grew 40%, to more than 1 billion units, in 2013 and are on pace to reach 1.7 billion units by 2017.  (Source: CDW)
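Taking the 1 billion units shipped in 2013 as the baseline, the 1.7 billion forecast for 2017 implies a compound annual growth rate of roughly 14% over the four intervening years, a notable slowdown from 2013’s 40% jump:

```python
# Implied compound annual growth rate (CAGR) from ~1.0B units (2013)
# to a forecast 1.7B units (2017), i.e. four years of growth
cagr = (1.7 / 1.0) ** (1 / 4) - 1
print(f"Implied CAGR: {cagr:.1%}")  # -> 14.2%
```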

In 2014 we will see an increase in “smart” mobile devices, increased access to unlimited storage space on these devices via emerging free and subscription cloud storage, and a quadrupling of bandwidth as 802.11ac takes hold. Most important of all, the majority of mobile carriers will finish large portions of their 4G/LTE infrastructure upgrades over the next three years, starting with some pretty large expansions this year. Most analysts seem to ignore or forget this, even though it will be by far the largest contributor to the “hyperconverged” market: wherever these bandwidth increases have occurred, sales of “smart” handheld mobile devices have increased dramatically. Look for all of these emerging technologies to trigger a number of unique situations as well as opportunities and even new markets. This level of traffic has never been experienced before, and my prediction is it will cause many problems with unprepared SaaS infrastructure capacity, which you should be prepared for in 2014. Even more important, however, will be the emergence of the “hyperconverged” network, the increased importance of the end-point device within the enterprise market, and the growth of the emerging “Bring Your Own Device” and “Bring Your Own Technology” (BYOD/BYOT) market, along with the management, security and compliance issues associated with it. Look for these emerging technologies and markets to become major influencers, receiving large boosts in capital investment as well as becoming large sources of new ideas and SaaS solutions to address the issues this new paradigm will create.

“However, the reality is that as all of these devices begin to communicate back to the cloud, they will begin to seriously erode the bandwidth capabilities of the current infrastructure. So look for startups with unique ideas for mitigating this increase in traffic to play a niche yet exciting and influential role, as the ‘idea’ people and think-tanks and brain-trusts such as the new Synapse Synergy Group begin to come into their own in 2014 and beyond.” – Jarrett Neil Ridlinghafer

Market Consolidation



Finally, we should see a lot more consolidation of the SaaS market, with the winners and losers becoming clearer in 2014. Look for Amazon to steam ahead and broaden their lead. Salesforce will continue to be strong, although they are already looking for ways to broaden their market as their primary business slows down. Microsoft will attempt to reinvent themselves in the cloud with their new CEO at the helm, and Oracle and IBM should begin to capture more market share as both of their new services start to take hold. As for VMware, it still seems a bit too early to say one way or the other. They did not come out with a big splash and a few billion dollars to throw around like IBM, and they actually do very little marketing, which makes one wonder: are they really ready, did they jump early in order to stop their slide into cloud obscurity, or are they so confident they just don’t need to advertise their cloud offering? They obviously have a massive private cloud and enterprise infrastructure base from which to draw, so one would hope, with their vendor-specific offering, that all those VMware enterprise infrastructures will pay off as hybrid becomes a much larger player in the enterprise over the next 3-5 years.

Analysis By Jarrett Neil Ridlinghafer
Chief Technology Analyst & Consultant
Compass Solutions, LLC
Cloud Consulting International

Science Fiction? Star Trek? Nope… The Coolest Top 10 Emerging Technologies for 2014


The World Economic Forum, famous for its annual Davos convention in Switzerland, has put out a new report identifying the top technological trends for the coming year.

“Technology has become perhaps the greatest agent of change in the modern world,” writes WEF’s Noubar Afeyan. “While never without risk, positive technological breakthroughs promise innovative solutions to the most pressing global challenges of our time, from resource scarcity to global environmental change.”

“By highlighting the most important technological breakthroughs, the Council aims to raise awareness of their potential and contribute to closing gaps in investment, regulation and public understanding,” he writes.

From wearable electronics to brain-computer interfaces, here are the big technologies to look out for this year.

1. Body-adapted Wearable Electronics

Kevin Smith/Business Insider

“These virtually invisible devices include earbuds that monitor heart rate, sensors worn under clothes to track posture, a temporary tattoo that tracks health vitals and haptic shoe soles that communicate GPS directions through vibration alerts felt by the feet.

“The applications are many and varied: haptic shoes are currently proposed for helping blind people navigate, while Google Glass has already been worn by oncologists to assist in surgery via medical records and other visual information accessed by voice commands.”

2. Nanostructured Carbon Composites


“Emissions from the world’s rapidly-growing fleet of vehicles are an environmental concern, and raising the operating efficiency of transport is a promising way to reduce its overall impact.

“New techniques to nanostructure carbon fibers for novel composites are showing the potential in vehicle manufacture to reduce the weight of cars by 10% or more. Lighter cars need less fuel to operate, increasing the efficiency of moving people and goods and reducing greenhouse gas emissions.”

3. Mining Metals from Desalination Brine

REUTERS/ Eduardo Munoz

As freshwater continues to dwindle, desalinating seawater has emerged as an option. “Desalination has serious drawbacks, however. In addition to high energy use, the process produces a reject-concentrated brine, which can have a serious impact on marine life when returned to the sea.

“Perhaps the most promising approach to solving this problem is to see the brine from desalination not as waste, but as a resource to be harvested for valuable materials. These include lithium, magnesium and uranium, as well as the more common sodium, calcium and potassium elements.”

4. Grid-scale Electricity Storage


“There are signs that a range of new technologies is getting closer to cracking [challenges]. Some, such as flow batteries may, in the future, be able to store liquid chemical energy in large quantities analogous to the storage of coal and gas.

“Various solid battery options are also competing to store electricity in sufficiently energy-dense and cheaply available materials. Newly invented graphene supercapacitors offer the possibility of extremely rapid charging and discharging over many tens of thousands of cycles. Other options use kinetic potential energy such as large flywheels or the underground storage of compressed air.”

5. Nanowire Lithium-ion Batteries

REUTERS/Yuya Shino

“Able to fully charge more quickly, and produce 30%-40% more electricity than today’s lithium-ion batteries, this next generation of batteries could help transform the electric car market and allow the storage of solar electricity at the household scale. Initially, silicon-anode batteries are expected to begin to ship in smartphones within the next two years.”

6. Screenless Display

AP/Christof Stache

“This field saw rapid progress in 2013 and appears set for imminent breakthroughs of scalable deployment of screenless display. Various companies have made significant breakthroughs in the field, including virtual reality headsets, bionic contact lenses, the development of mobile phones for the elderly and partially blind people, and hologram-like videos without the need for moving parts or glasses.”

7. Human Microbiome Therapeutics

Getty Images

“Attention is being focused on the gut microbiome and its role in diseases ranging from infections to obesity, diabetes and inflammatory bowel disease.

“It is increasingly understood that antibiotic treatments that destroy gut flora can result in complications such as Clostridium difficile infections, which can in rare cases lead to life-threatening complications. On the other hand, a new generation of therapeutics comprising a subset of microbes found in healthy gut are under clinical development with a view to improving medical treatments.”

8. RNA-based Therapeutics

Abid Katib/Getty Images

Developments in basic ribonucleic acid (RNA) science, synthesis technology, and in vivo delivery (i.e., delivery within a living organism) “are combining to enable a new generation of RNA-based drugs that can attenuate the abundance of natural proteins, or allow for the in vivo production of optimized, therapeutic proteins. Working in collaboration with large pharmaceutical companies and academia, several private companies that aim to offer RNA-based treatments have been launched.”

9. Quantified Self (Predictive Analytics)

Julian Finney/Getty Images

“Smartphones contain a rich record of people’s activities, including who they know (contact lists, social networking apps), who they talk to (call logs, text logs, e-mails), where they go (GPS, Wi-Fi, and geo-tagged photos) and what they do (apps we use, accelerometer data).

“Using this data, and specialized machine-learning algorithms, detailed and predictive models about people and their behaviors can be built to help with urban planning, personalized medicine, sustainability and medical diagnosis.”

10. Brain-computer Interfaces

REUTERS/ Morris MacMatzen

The ability to control a computer using only the power of the mind is closer than one might think. Brain-computer interfaces, where computers can read and interpret signals directly from the brain, have already achieved clinical success in allowing quadriplegics, those suffering ‘locked-in syndrome’ or people who have had a stroke to move their own wheelchairs or even drink coffee from a cup by controlling the action of a robotic arm with their brain waves. In addition, direct brain implants have helped restore partial vision to people who have lost their sight.

By Jarrett Neil Ridlinghafer
Chief Technology Analyst
Compass Solutions, LLC


Hennessey Venom GT is the world’s fastest production car, beating out the Bugatti Veyron Super Sport


The Hennessey Venom GT has been named the world’s fastest production car, taking away the title from the Bugatti Veyron Super Sport. The fastest street-legal production car in the world has a maximum speed of 265.7 mph, while the production Veyron, whose engine is slightly detuned, maxes out at 258 mph. The Veyron Super Sport still holds the title of the world’s fastest car, with the pre-production model reaching 267.8 mph.



The technicality in question is engine tuning. A standard production version of the Hennessey Venom GT managed to come close to 270 mph along a 2.9-mile stretch of runway, whereas the Bugatti Veyron Super Sport, using Volkswagen’s 5-mile-plus straight, got closer still, hitting 267.8 mph (431 kph). After setting its record, however, the production version of the Veyron had its engine slightly detuned so that its top speed would be limited to a mere 258 mph, in order to guarantee the tires don’t disintegrate.

Therefore, though it is still technically the world’s fastest petrol engine car, the Bugatti is not the world’s fastest street-legal production car.

The Hennessey Venom GT was tested in February, but the figures have only now been ratified. The car, which is loosely based on the Lotus Exige, is powered by a twin-turbocharged 7.0-liter V-8 engine. It pumps out 1,244 horsepower and, as the car weighs exactly 1,244 kg, its power-to-weight ratio is 1,000 horsepower per ton.
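That power-to-weight figure checks out: 1,244 hp spread over 1,244 kg is exactly one horsepower per kilogram, which works out to 1,000 hp per metric tonne:

```python
power_hp = 1244  # claimed output of the twin-turbocharged 7.0 L V-8
mass_kg = 1244   # quoted curb weight

hp_per_tonne = power_hp / (mass_kg / 1000)
print(f"{hp_per_tonne:.0f} hp per metric tonne")  # -> 1000
```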

“While a Veyron Super Sport did run 267.8 mph, Bugatti speed-limits its production vehicles to 258 mph,” said company founder and president John Hennessey. “Thus, at 265.7 mph the Venom GT is the fastest production car available to the public.” Hennessey also suggested that his company was at a disadvantage because it only had a 2.9-mile runway over which to set its benchmarks, whereas the Veyron had the luxury of using Volkswagen Group AG’s private test track located near Ehra-Lessien, Germany which, at 5.9 miles, has one of the world’s longest straight sections of track. “Afforded the same distance to accelerate, the Venom GT would exceed 275 mph,” said Hennessey.

In February of 2013, the Venom GT officially became the quickest accelerating production vehicle in the world as it ran 0-300 km/h in 13.63 seconds, thus establishing a new Guinness World Record. The car also managed to go from 0-60 mph in 3.05 seconds and 0-100 mph in 5.88 seconds, and it ran the standing quarter-mile in 10.29 seconds at 158.83 mph.
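For context, those acceleration figures imply roughly the following average g-loads; a rough back-of-the-envelope conversion that ignores drag, wheelspin and gear changes:

```python
G = 9.81  # standard gravity, m/s^2

def avg_accel_g(speed_kmh: float, seconds: float) -> float:
    """Average acceleration, in g, for a standing start to the given speed."""
    return (speed_kmh / 3.6) / seconds / G

print(f"0-300 km/h: {avg_accel_g(300, 13.63):.2f} g")           # ~0.62 g
print(f"0-60 mph:   {avg_accel_g(60 * 1.609344, 3.05):.2f} g")  # ~0.90 g
```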

Only 29 Venom GTs are scheduled to be built, and each will cost its lucky owner $1.2 million plus shipping, not including options. Hennessey claims that the first 10 have already been sold.

By Jarrett Neil Ridlinghafer
Chief Technology Analyst
Compass Solutions, LLC

Former BlueHat Prize winner pwns Microsoft, researcher bypasses all EMET protections


Security researcher Jared DeMott, who won third place in the BlueHat Prize, showed how to attack and bypass all of EMET’s protections.

At BSides security conference in San Francisco, Bromium Labs’ security researcher Jared DeMott showed attack code capable of bypassing “all of the protections” in Microsoft’s free Enhanced Mitigation Experience Toolkit (EMET) 4.1.

Many people believe EMET can prevent attackers from exploiting holes, such as zero-days spotted in the wild, and gaining access to computer systems, but it has been bypassed before; Microsoft pointed out, “EMET is not a shield that’s guaranteed to mitigate all attacks, but a way to ensure that the development of exploits is more difficult and expensive.”

Bromium Labs wrote:

    We found that EMET was very good at stopping pre-existing memory corruption attacks (a type of hacker exploit). But we wondered: is it possible for a slightly more technical attacker to bypass the protections offered in EMET? And yes, we found ways to bypass all of the protections in EMET.

According to Bypassing EMET 4.1 [pdf], “Each EMET rule is a check for a certain behavior. If alternate behaviors can achieve the attacker objectives, bypasses are possible.” Not only does the whitepaper give the technical details, it also includes an especially amusing payload bypass message:

DeMott’s custom EMET payload bypasses pwnage message

Back in 2012, DeMott was awarded third place in the BlueHat prize. Microsoft originally planned to give DeMott an MSDN subscription valued at $10,000, but after the crowd loudly booed that prize, Microsoft added $10,000 to the MSDN subscription.

Jared DeMott receiving 3rd place BlueHat Prize

Although DeMott doesn’t suggest anything like “Microsoft killed my Pappy,” the whitepaper does mention that the return oriented programming (ROP) protections from the BlueHat $50,000 second prize winner, which “made it into EMET, do not stop ROP at all. The notion of checking at critical points is akin to treating the symptoms of a cold, rather than curing the cold. Perhaps one of the other prize submissions would have better addressed the problem of code reuse.”

Both that little dig and the pwnage message seemed amusing to me.

Bromium Labs wrote:

    The impact of this study shows that technologies that operate on the same plane of execution as potentially malicious code offer little lasting protection. This is true of EMET and other similar userland protections. That’s because a defense that is running in the same space as potentially malicious code can typically be bypassed, since there’s no “higher” ground advantage as there would be from a kernel or hypervisor protection. We hope this study helps the broader community understand the facts when making a decision about which protections to use.

The researchers made four recommendations: EMET should set virtual memory (Hook NtProtectVirtualMemory) protection by default; “create a new EAF protection scheme (even though that still wouldn’t stop shellcode that doesn’t use EA resolution); check more than one CALL deep to see if code was RETed into; and expand ROP mitigations to 64-bit code.” DeMott added, “But even with those fixes, many of the weaknesses are generic in nature and unlikely to be sufficiently addressed by userland protection technologies like EMET.”

EMET 4.1, which was released in Nov. 2013, supposedly has a setting that is capable of preventing Bromium Labs’ bypasses, according to Jonathan Ness, principal security development manager for Microsoft Trustworthy Computing. “Microsoft collaborated with Bromium on their latest research to ensure continued protection for our customers. The Enhanced Mitigation Experience Toolkit (EMET) 4.1 contains a setting to address this issue and help customers with their ongoing defense-in-depth strategies.”

Microsoft “quietly” just paid its second $100,000 bounty to security researcher Yu Yang on Valentine’s Day, but it doesn’t sound like DeMott will be awarded such a bounty. Instead, Microsoft is supposed to credit Bromium Labs’ research when EMET 5.0 is released. When that might happen, however, is anyone’s guess.

DeMott did add a personal note to Bypassing EMET 4.1 [pdf]. “Though EMET is far from perfect, I personally see Microsoft making more of an effort toward security compared to other large vendors; for that I applaud them.”

Danger! Danger! iOS update may brick your iPhone 5S or iPad Air: Before you upgrade to iOS 7.0.6, read this!


By Marc Gibbs

Terrific! Apple’s latest “must-have to ensure your browsing safety” iOS update just bricked my iPhone 5S.

After nagging me to update to iOS 7.0.6 for the last couple of days I finally had a window in which I knew I wouldn’t need my phone so I let the update proceed. Meh.

If you’ve been stuck in the wilderness for a couple of weeks you might not know of the need for this recent release: It fixes a bug in Apple’s support for SSL/TLS that could allow a bad guy to successfully operate a man-in-the-middle attack … a real and serious risk when using a public WiFi network.

Knowing that this update was kinda important I started the process, agreed to the EULA (without reading it, natch), and went off to make a cup of tea. I came back ten minutes later to find my iPhone screen black and no amount of pushing buttons did any good.

If this happens to you, be warned: All of the usual ways of dealing with a bricked iPhone will not work except for connecting it to iTunes. iTunes will tell you that the iPhone is in “recovery mode” and offer to download iOS (wasting anything from an hour to two hours of your time in the process) and then restore the device to factory settings so you get to configure it all over again or restore from a backup. Meh.

It appears from my research that this problem is restricted to the iPhone 5S and iPad Air, so if that’s your phone or tablet model, then before you allow the update: do a backup, make sure your iTunes is up-to-date and running on a machine next to you, then cross your fingers, pour yourself a stiff drink, and let the update start.

Why the iWatch won’t measure glucose levels: A multitude of reasons why the iWatch will not monitor user glucose levels


It’s widely believed that Apple sometime in 2014 or soon thereafter will introduce what many are calling the “iWatch,” a wearable device capable of tracking all sorts of interesting biometric data.

Over the past few months, Apple has hired a formidable team of biomedical experts with deep experience in medical sensor technologies. Notably, many of the folks now working for Apple have done impressive and groundbreaking work in the realm of continuous glucose monitoring (CGM).

Many news outlets, as a result, have reported that Apple’s rumored iWatch may be able to non-invasively measure a user’s glucose levels. Such a device would be a godsend for diabetics who often have to monitor their blood glucose levels multiple times a day, either by drawing blood from their finger or through an implanted sensor paired with an external monitoring device.

The rumors swirling around the iWatch have grown so unwieldy, the expectations so beyond the realms of modern science, that many are already pegging the iWatch as a revolutionary medical device that will leapfrog competing devices like the Fitbit and the Nike FuelBand by offering unprecedented medical sensor technologies to the masses. Just last week, the San Francisco Chronicle published a report claiming that Apple is researching sensor technologies capable of predicting heart attacks “by studying the sound blood makes as it flows through arteries.”

It’s time to jump back to reality.

Here, we will specifically focus on the idea that Apple’s iWatch will be able to measure a user’s glucose levels.

Recently, well-connected 9to5Mac blogger Mark Gurman reiterated that the iWatch will, in fact, be able to monitor glucose levels.

    Our knowledge is reliant upon what Apple is programming the Healthbook app to be capable of and based on the company’s recent hires. Our sources today have reiterated that Healthbook is planned to be able to read glucose-related data…

Given Gurman’s impressive track record for accurately breaking Apple news, many outlets have similarly made the leap from “Apple is hiring folks with expertise in continuous glucose monitoring” to “the iWatch will monitor user glucose levels.”

A deeper examination of the issue, however, strongly suggests otherwise.

Non-invasive CGM is an incredibly complex problem that presents a number of challenging medical and technological hurdles. Indeed, medical device companies have been trying to solve this problem for decades, with no real success to speak of.

Apple and C8 Medisensors

Over the past few months, Apple has hired a number of scientists and engineers from C8 Medisensors, an innovative California-based company (now defunct) that was singularly focused on developing a non-invasive CGM device called the HG1-c.

To really gauge the feasibility of an iWatch that monitors glucose levels, it’s helpful to take a deeper look at the HG1-c’s capabilities and limitations. Indeed, doing so brings to light a number of daunting challenges that would arise in bringing the technology to market, let alone embedding it in a wristwatch.

Employing a technology called Raman spectroscopy, the HG1-c was able to measure users’ glucose levels by transmitting a pulse of light through the skin, thereby causing glucose molecules to vibrate. An optic sensor then detected the light reflected off of these molecules, whereupon the device analyzed the resulting “fingerprint” and returned a glucose reading.

Issue 1: Size

The HG1-c was a wearable device intended to be worn across the abdomen. It was impressively compact for the technology it housed, but certainly not small enough to embed inside a wristwatch. Bear in mind that it also came with a separate battery pack. Together, the device weighed in at 5 ounces, about an ounce heavier than the iPhone 4.

Issue 2: Sunlight

Size aside, the HG1-c carried a number of other limitations that would prevent it from being wrist ready.

On account of the technology it used, the HG1-c was extremely sensitive to sunlight and needed to be cloaked in as much darkness as possible to truly be effective. This, of course, is an obvious deal-breaker for any device meant to be worn on the wrist.

I was able to chat with Charles Martin, a former C8 employee who helped work on the HG1-c’s firmware, and he expounded on this in greater detail.

Mr. Martin explains:

    Yes, the camera sensor had to be shrouded in darkness to function. You have to understand that Raman Spectroscopy is looking for a very faint signal emitted by the glucose molecules. A rough analogy: try to pick out someone’s voice in a noisy room. The sunlight was this kind of noise that the camera sensor was not calibrated against. They did try to implement algorithms to discount measurements against sunlight anomalies, but some of the anomaly criteria these algorithms were supposed to detect, overlapped. This made things hard to verify and test on the device.

Also bear in mind that the camera sensor’s performance was affected by variables as innocuous as a user’s body hair and skin color, limitations that certainly don’t bode well for a wearable device designed for the masses.

Issue 3: Physical Activity

Further, the HG1-c wasn’t designed to be worn while partaking in physical activity because the sensor, in order to work accurately, had to be nestled up directly against the skin. And to help provide better optics for the sensor, users of the device were supposed to apply a layer of gel between their skin and the device.

Again, this limitation is an absolute deal breaker for a mass consumer product intended for the wrist.

Issue 4: Battery Life

Another issue that would seemingly preclude C8’s technology from working in a wristwatch, at least for now, pertains to battery life. The HG1-c, which came with a separate battery pack, featured a battery life of 30 hours when taking glucose measurements every 15 minutes and a battery life of 20 hours when taking glucose measurements every 10 minutes.
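A quick back-of-the-envelope check on those figures is telling: both schedules work out to the same number of readings per charge, which suggests the optical measurement itself, rather than standby time, dominated the power budget. (That inference is ours, not a C8 claim.)

```python
# Back-of-the-envelope check on the HG1-c battery figures quoted above.
# The hour/interval numbers come from the article; the inference is ours.

def measurements_per_charge(battery_hours, interval_minutes):
    """Total glucose readings one charge supports at a fixed interval."""
    return battery_hours * 60 // interval_minutes

slow = measurements_per_charge(30, 15)  # 30 h at one reading every 15 min
fast = measurements_per_charge(20, 10)  # 20 h at one reading every 10 min

print(slow, fast)  # both schedules work out to 120 readings per charge
```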

With reports that Apple is already struggling to attain battery life of “at least 4-5 days between charges,” it stands to reason that battery-hungry CGM functionality isn’t in the cards.

Achieving the impossible in a few months

So if we’re to believe that an iWatch capable of non-invasive CGM is on the horizon, we’re forced to make a number of lofty assumptions about Apple’s ability to miniaturize the technology, vastly improve its battery life, and address all of the other aforementioned issues that make the device ill-suited as a mass market consumer product. Also keep in mind that the HG1-c was only tested on, and intended for, non-pregnant individuals over the age of 18.

What’s more, many of Apple’s biomedical and medical hires only joined the company in the last few months. Developing a bona fide medical breakthrough device in such a compressed timeframe runs against all notions of plausibility.

For the sake of discussion, let’s assume that the iWatch only measured glucose levels at the directive of the user. In other words, let’s envision a non-continuous and user-initiated glucose monitor.

Even in this scenario, a multitude of serious issues remain.

To say that monitoring glucose levels via non-invasive means is an extremely complex and challenging task is an understatement on the grandest scale.

A multitude of companies over the course of many decades have tried and failed to tackle this very problem even as it pertains to invasive glucose monitoring, spending untold hundreds of millions of dollars in the process.

That said, C8’s own device, which, again, relied upon Raman spectroscopy, showed much promise. The device even attained CE Mark Approval in Europe in October of 2012.

Nonetheless, the technology still had a long way to go before becoming a viable product.

On this topic, I was able to chat with C8 Medisensors CTO Rudy Hofmeister, who explained to me that while the HG1-c device was a technological feat, it was nowhere near where it needed to be for a consumer device.

    The HG1-c was sort of like a dancing bear; that it worked as well as it did was an absolute miracle, but it was nowhere near where it needed to be in order to be a viable medical product, and certainly not a consumer medical product where the bar is even higher.

    There’s a big difference between performance and efficacy. The device certainly had a level of performance that was statistically significant; it was good sometimes, random sometimes, but not anywhere good enough to be used as a diagnostic or monitoring device for a disease state.

When I inquired about the device receiving CE Mark Approval, Hofmeister explained that this approval was for an early version of the device that wasn’t even manufacturable.

    The CE Mark Approval allows you to sell, but it doesn’t really cover efficacy. So while you can meet the device’s claims, those claims can be so watered down that the product has no value.

While Hofmeister left C8 about a year before operations came to a halt, he remained abreast of what was going on at the company, as the talented folks working there were part of the team he helped assemble.

When I inquired as to how fast the C8 team was making improvements to the device before the company ran out of funding, Hofmeister explained:

    All avenues to increase efficacy were being pursued, but any improvements attained were incremental, and none that would have led to the level of improvement that was needed.

What about a different approach for non-invasive CGM?

Might it be possible for Apple’s team to employ a strategy other than Raman spectroscopy? Of course, but as you’ll read below, no other non-invasive CGM approach – and there have been many – has ever borne fruit.

Fact: Non-invasive glucose monitoring has never been cracked

Underscoring the difficulty, frustration, and complexity associated with non-invasive glucose monitoring, John L. Smith, a leading expert on non-invasive glucose measuring technologies, wrote a book on the pursuit in 2007 and updated it in 2013.

Smith’s seminal book covers, in exhaustive detail, the vast number of companies (including C8) that have experimented with an extensive array of technologies and strategies in what ultimately remains a difficult puzzle that has never seen a marketable solution.

    And in those seven years, I’m sad to say that no technology has yet reached the marketplace, or for that matter, been reliably reported as actually succeeding in laboratory or clinical testing. I have personally looked at perhaps another two dozen technologies (but can’t discuss many of these newer ones, due to confidentiality agreements), been intrigued by a few and disappointed in most others. Some new companies have joined the quest, and many others have reached the end of their participation.

The 2013 version of the book concludes:

    As in the attempts detailed here, the horizon will continue to be clouded by spurious correlation, incomplete understanding of the sources of error, lack of rigorous evaluation of results and wishful interpretation of data. Unlike the cure for cancer, where partial success has been achieved in many areas, this one still seeks a breakthrough. It is hoped that the attempts detailed here will help to prevent others from repeating past mistakes and premature announcements, but a rational assessment would suggest that many more lie ahead.

In short, measuring glucose levels via non-invasive means has never been proven to successfully work in any product to hit the market. Solving any problem that lies at the intersection of medicine and technology is ridiculously tough. Doing so with a wearable mass-consumer mobile device only adds many more layers of complexity.

To believe that Apple, with a team that was mostly assembled in mid-2013 and onwards, will soon be able to crack this nut is patently absurd.

Apple would prefer to steer clear of FDA approval and medical audits

Even if we hypothetically assume that Apple and its impressive all-star team of scientists and engineers can successfully pull off the impossible, miniaturize the device, address a number of usability issues, and improve upon its efficacy in a significant and groundbreaking way, such a device would still require FDA approval and a whole gamut of oversight that Apple isn’t accustomed to.

What’s more, securing FDA approval would entail extensive and careful clinical trials, with an approval process that could last as long as 18 months. In essence, entering the world of medical devices would require Apple to jump through hoops it traditionally takes pains to avoid.

On this topic, I was able to get in touch with a former C8 employee (who wished to remain anonymous) who explained some of the inherent regulatory hurdles associated with developing medical devices:

    Any company that wants to sell a medical device must adhere to a regulated auditing process. The C8 product was probably as non-intrusive of a device as you can get. It was not categorized as an “implantable” device. At worst, a patient would be exposed to a laser light source (no stronger than a typical laser pointer).

    C8 had to follow all sorts of procedures to ensure it would successfully pass an ISO 13485:2003 audit. For example, engineers couldn’t file bugs with the existing bug-tracking tool, because the bugs were fair game for an auditor. This process was very time-consuming…

    I don’t think Apple is the type of company that would be comfortable working through a full medical device audit process. I don’t think it would fit within the company’s technology culture either (having worked there for a time).

An iWatch that measured other biometric data while steering clear of glucose measurements would undeniably make it past the watchful eyes of the FDA much more quickly.

Apple’s success is rooted in its ability to deliver incredibly polished products that do a few things extraordinarily well. Apple’s laser-like focus is why iOS didn’t receive copy-and-paste functionality until iOS 3 and multitasking support until iOS 4.

So even if we assume that CGM is on Apple’s iWatch roadmap, the notion that this feature will appear in the first iteration of the device is highly improbable. Simply because Apple hired an impressive team of biomedical and sensor technology experts doesn’t mean the first iteration of the iWatch will be a magical, all-knowing sensor device.

Apple’s initial efforts to develop an in-house mapping solution provide an illustrative example.

Apple first began putting together its internal mapping team in July of 2009 when it acquired a geo-mapping company called Placebase. And yet, Apple’s standalone Maps app wasn’t released until September 2012, a full three years later. What’s more, many of the innovative features that made Placebase’s mapping technology so unique have yet to re-appear in iOS.

Accordingly, Apple’s hiring spree of sensor experts shouldn’t reflexively be construed to mean that the iWatch will monitor a myriad of health vitals out of the gate. Apple is a remarkably patient company, and again, keep in mind that many of Apple’s biomedical and medical sensor hires have barely been at the company for six months.

Remember the backlash over Apple Maps? Now imagine how amplified that would be if Apple released a faulty device purporting to measure a serious health vital like glucose levels. Apple wouldn’t even consider such a feature unless it was supremely confident that the technology worked flawlessly.

Apple’s MO is simple – it methodically adds features to its products, slowly but surely improving its product line with each successive release. So while it stands to reason that Apple has a lot of brilliant folks working on incredibly innovative sensor technologies, there’s no strong evidence, nor historical precedent, that would have us reasonably conclude that the initial version of the iWatch will be an advanced sensor supermachine.

Keeping iWatch expectations realistic

As a result, I think there’s a lot of merit in MobiHealthnews writer Brian Dolan’s assertion that the iWatch’s “technological capabilities will be simpler than rumors have indicated.”

As is Apple’s style, expect it to measure a number of health-related variables exceptionally well as opposed to measuring every conceivable vital sign under the sun. Indeed, Dolan’s own sources relayed that Apple’s recent hires, at least for now, are there to “ensure that the health sensing capabilities of the device” are accurate.

Former C8 CTO Rudy Hofmeister concurs on this point, arguing that any wearable device Apple releases will likely focus on “general health and fitness” rather than medical vitals.

So can we expect the iWatch to measure glucose levels? Don’t bet on it.

Addressing rumors of Apple’s alleged pursuit of non-invasive CGM, John L. Smith writes:

    The participation of funding by big companies with no experience in glucose monitoring is sometimes pejoratively called “dumb money.” In the same way that inventors can become enamored by the prospect of helping people with diabetes (and coincidentally “cashing in” on the result), companies like GE and Motorola have made what turned out to be unwise investments in this area. Apple and Samsung, and possibly Google might be on the same trail, trying to create a watch that measures glucose noninvasively.

    One way to see who else is interested in noninvasive glucose is to see where the technical principals go after a company shuts down. An Apple-watching blog, “9 to 5 Mac” reports that Apple hired several experts in the field of non-invasive blood monitoring sensors from C8 MediSensors, and also hired employees who had worked at Senseonics and InLight Solutions. Time will tell if this turns out to be a fruitful pursuit for them.

Wearable technologies down the road

The technological and usability advancements seen from the original iPhone to what we have now with the iPhone 5s are nothing short of astounding. This, of course, goes back to Apple’s penchant for slow, careful, and measured improvements.

So while the iWatch at first glance may not be the medical marvel some are understandably hoping for, what’s truly exciting is that Apple, according to a bevy of circumstantial evidence, is seemingly focused on a brand new product category.

Indeed, Apple’s interest in wearables is hardly a well-kept secret. Tim Cook last year told Kara Swisher and Walt Mossberg that wearable technology is a profoundly interesting space “ripe for exploration.” Cook also added that the “whole sensor field is going to explode. It’s a little all over the place right now. With the arc of time, it will become clearer.”

Coupled with Apple’s formidable team of biomedical engineers and medical sensor experts, the iWatch may prove to be the first step toward what will one day, though not at first, be a revolutionary device.

10 amazing facts about the world’s largest radio telescope


The massive effort that will ultimately bring together the world’s largest radio telescope got a little more focused this week. The organization running the venture picked some 350 scientists and engineers, representing 18 nations and drawn from nearly 100 institutions, universities and industry, to complete the design phase of the Square Kilometer Array (SKA) project.

The SKA telescope will combine 3,000 dish-shaped antennae with other hybrid receiving technologies spread over distances of about 3,000 kilometers, making it 50–100 times more sensitive than today’s best radio telescopes and covering frequencies from 0.15 to 30 GHz (wavelengths of 2 m to 1 cm).

The $2 billion SKA project, once operational, will be used to address some of “humankind’s greatest questions, such as our understanding of gravity, the nature of dark energy, the very formation of the Universe and whether or not life exists elsewhere,” the group says.

The announcement this week included the formation of an array of groups that will oversee various key development areas of the SKA. For example, the Dish Consortium (DSH), led by Dr. Mark McKinnon of the Commonwealth Scientific and Industrial Research Organisation in Australia, will oversee all activities necessary to prepare for the procurement of the SKA dishes, including local monitoring and control of each dish’s pointing and other functionality, their feeds, necessary electronics and local infrastructure. DSH also includes planning for the manufacturing of all components, and the shipment, installation and acceptance testing of each dish on site. Other groups, like the Low Frequency Aperture Array Consortium, will manage the development of the antennas, on-board amplifiers and local processing required for the aperture array telescope of the SKA.

As SKA development continues, its developers put up a list of the radio telescope’s “most amazing” facts as they see them.  The amazing facts list looks like this:

    The data collected by the SKA in a single day would take nearly two million years to play back on a typical MP3 player.
    The SKA central computer will have the processing power of about one hundred million PCs.
    The SKA will use enough optical fiber linking up all the radio telescopes to wrap twice around the Earth.
    The dishes of the SKA when fully operational will produce 10 times the global internet traffic as of 2013.
    The aperture arrays in the SKA could produce more than 100 times the global internet traffic as of 2013.
    The SKA will generate enough raw data to fill 15 million 64 GB MP3 players every day.
    The SKA supercomputer will perform 10^18 operations per second – equivalent to the number of stars in three million Milky Way galaxies – in order to process all the data that the SKA will produce.
    The SKA will be so sensitive that it will be able to detect an airport radar on a planet 50 light years away.
    The SKA will contain thousands of antennas with a combined collecting area of about one square kilometer (that’s 1,000,000 square meters).
    Analysts estimate the London Olympics was the most data-heavy event in recent history – with some 60 Gbytes, the equivalent of 3,000 photographs, travelling across the network in the London Olympic Park every second. This, however, is only equivalent to the data rate from about half of a single low frequency aperture array station in SKA phase one.
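As a sanity check, the “15 million 64 GB MP3 players every day” figure from the list above implies a sustained raw data rate on the order of 11 terabytes per second (decimal gigabytes assumed):

```python
# Rough sanity check of the "15 million 64 GB players per day" figure.
# All input numbers come from the facts list; decimal GB (10**9 bytes) assumed.

players_per_day = 15_000_000
bytes_per_player = 64 * 10**9
seconds_per_day = 86_400

bytes_per_day = players_per_day * bytes_per_player          # 9.6e17 bytes/day
terabytes_per_second = bytes_per_day / seconds_per_day / 10**12

print(round(terabytes_per_second, 1))  # roughly 11.1 TB of raw data per second
```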

How the Syrian Electronic Army Hacked Forbes: A Detailed Timeline of the Incident


Early Thursday morning, a Forbes senior executive was woken up by a call from her assistant, saying that she’d be working from home due to a forecast predicting the snowiest day of the year. When she ended the call, the executive saw on her Blackberry that she had just received a bluntly worded email that seemed to have been sent by a reporter at Vice Media, asking her to comment on a Reuters story linked in the message.

Any other time, she says she would have waited to read the linked story later at the Forbes office. But with the sale of the 96-year-old media company pending, she was on the alert for news. Groggily stepping out of bed, she grabbed her iPad, opened the email in her Forbes webmail page through a shortcut on the device’s homepage and tapped the emailed link.

In her half-asleep state, she was prompted for her webmail credentials and entered them, thinking her access to the page had timed out. When the link led to a broken url on Reuters’ website, she got dressed and began her snowy commute from Brooklyn to Manhattan without a second thought. “It was so insidious,” she says. “I didn’t know I had been hacked for another two hours.”

In fact, the phishing email had set in motion a two-day cat-and-mouse game with Syrian Electronic Army (SEA) hackers who would deface the Forbes website and backend publishing platform, attempt to post market-moving news, steal a million registered users’ credentials, and briefly offer them for sale before leaking the data online.


Update: Forbes’ Kashmir Hill has interviewed a spokesperson for the SEA who offered some further explanation of the motivations and methods behind the attack.

Compared with the Chinese attack that penetrated the New York Times in 2012 or the cybercriminal theft of millions of credit card numbers from Target late last year, the SEA attack on Forbes doesn’t seem to have been technically complex. But the hackers were nonetheless clever and persistent enough to stay a step ahead of the media company’s security measures. A week later, Forbes staff still haven’t entirely ended a partial email and publishing lockdown designed to prevent the attackers from breaching the site again and limit the damage if they do regain access.

Forbes’ chief product officer Lewis Dvorkin has already shared some details of the attack along with his thoughts on the incident. On Wednesday morning, users were again allowed to log in to the Forbes site and required to choose new, stronger passwords.


But in the interest of transparency–and out of a sense that we should subject ourselves to the same journalistic scrutiny as the subjects of our stories–fellow reporter Kashmir Hill and I have assembled a timeline based on our experience of the hack, as well as interviews with those staffers who were willing to speak with us.

Here’s what we’ve learned, with approximate times marked:

Thursday, 6:15am: A Forbes senior executive received a phishing email from a compromised Vice Media email account with a link to a fake Reuters story about Forbes. The link led to a spoofed webmail login where she shared her email credentials. (I reached out to Vice to ask about the possible compromise of the company’s email, but didn’t get a response.)

7:45am: The senior executive’s hacked account was used to send a second round of phishing emails to Forbes staffers, again asking them to check out a supposed news story about Forbes. A Forbes editorial staffer working from home who had disregarded the earlier, more suspicious-looking phishing attempt was duped by this second round of emails. “The imprimatur of [the senior executive] suggested something was actually going on here,” he says. “I’ve been kicking myself black and blue over this.”

The editorial staffer, who had “super-administrator” privileges on Forbes’ WordPress publishing platform, entered his email credentials into a fake webmail login page. When the link took him to an old NBC News story, he realized he’d been phished and alerted the Forbes IT department, who reset his email credentials.

8:15am: A Forbes IT administrator sent out a warning to staffers about the phishing attempts.

10:00am: A financial reporter fell for a stealthier version of the phishing attempt. As he describes it, he clicked on the link in the Vice phishing email but didn’t enter his email credentials. When he returned to WordPress to continue blogging, however, he was prompted to log in again. Two new posts appeared on his blog almost immediately. One read, “BREAKING: US Treasury declares all foreign T-bills void. Yellen to hold a press conference in 15 minutes,” and another, “Yellen to press: ‘We can no longer tolerate China’s currency manipulation.’”

When we ran this series of events by web hacking expert and Whitehat Security founder Jeremiah Grossman, he speculated that the phishing email link performed an attack known as a Cross-Site Request Forgery that hijacked the reporter’s browser to post the stories. Forbes staff say now that they haven’t ruled out the possibility that malware may have also been installed on his machine, though Grossman says he doubts this is the case.
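The defense that makes this kind of forgery fail is a per-session (or, in WordPress’s case, per-action “nonce”) token embedded in the site’s own forms, which a cross-site page cannot read. A simplified, hypothetical sketch of that check, not WordPress’s real implementation:

```python
# Minimal sketch of the anti-CSRF token check a publishing platform can use.
# Hypothetical names throughout; WordPress's real mechanism is its "nonce" system.

import secrets

session_tokens = {}  # session_id -> token the server embedded in its own form

def issue_token(session_id):
    """Server generates a token and embeds it in its own pages only."""
    token = secrets.token_hex(16)
    session_tokens[session_id] = token
    return token

def accept_post(session_id, submitted_token):
    """Reject any publish request whose token doesn't match the session's."""
    return session_tokens.get(session_id) == submitted_token

token = issue_token("reporter-session")
assert accept_post("reporter-session", token)         # legitimate, same-site post
assert not accept_post("reporter-session", "forged")  # cross-site forgery fails
```

A forged cross-site request rides on the victim’s cookies automatically, but it cannot supply a token it never saw, which is why the check works.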

In less than five minutes, an editor spotted the fake news stories and took them down.

In her interview with a spokesperson for the Syrian Electronic Army, my colleague Kashmir Hill was told that the fake headlines were an attempt to divert Forbes’ attention while they burrowed deeper into the site.

10:15am: The Forbes operations staff decided to lock users out of WordPress until they could address the compromise of the site. “We realized that this had escalated and become a real problem,” says Forbes chief operations officer Mike Federle. “We jumped into code red.”


Over the next hours, they reset the credentials of all Forbes users with super-administrator privileges along with any other users who said they’d fallen for the phishing scheme, notifying them of their new credentials one-by-one in person or over the phone to avoid using email after the previous phishing schemes.

6:00pm: The site was reopened to users.

7:00pm: The hackers changed a headline on social media editor Alex Knapp’s blog page to “The Syrian Electronic Army Was Here.” Although the Syrian Electronic Army later wrote on their Twitter feed that their entire attack could be blamed on Knapp, this seems to have been misdirection. The defacement of his page was performed using the same editorial staffer’s super-administrator account that had been first compromised that morning. In the short time before his email credentials were changed by the Forbes IT staff, the hackers had gained access to the editorial staffer’s high-privilege WordPress account by exploiting WordPress’s “forgot password” function and resetting his publishing account password from his compromised email inbox.

In fact, Forbes staff now believe the hackers may have used their initial access to the super-admin WordPress account to change both the email address and the social networking accounts–such as Linkedin, Twitter, Google+, and Facebook–associated with it. So despite the editorial staffer’s WordPress credentials being changed earlier in the day, they were able to quickly regain access to the account by again triggering the “forgot password” function and accessing the reset email sent to their own account.


10pm: The site was again locked down to prevent further compromise. After discovering that the hackers had changed the email addresses associated with compromised users’ WordPress accounts, Forbes staff changed them back to the users’ own addresses.

Midnight: The site was reopened to users.

Friday, sometime between 12:30am and 3:30am: The hackers again accessed the same editor’s super-administrator account on WordPress, possibly taking advantage of his altered social logins. Though Forbes staffers had fixed the email address associated with the account, they say they may not have changed the social accounts connected with it.

3:30am: The hackers used the editor’s account to deface the blog pages of six more Forbes staffers–including mine–with the phrase “Hacked By The Syrian Electronic Army.” Some of these staffers had linked their Twitter accounts with their WordPress accounts, so that the SEA message also appeared on their personal Twitter feeds.

3:40am: The site was locked down for a third time.

7:30am: After social logins were disabled, the site reopened.

8:00am: Using a method that’s still not clear, the hackers regained access yet again to the editor’s account–possibly by exploiting a vulnerability in a WordPress plugin that allowed them to insert malicious code into the site. They changed the Forbes WordPress installation theme, inserting their own logos and a Syrian flag designed from ones and zeroes. At some point, they also inserted code into the top post linked on the site’s homepage so that it redirected thousands of users to the SEA Twitter feed.

11:30am: Forbes administrators were forwarded an email from a hacker named Ethical Spectrum that had been sent to seemingly random staffers earlier in the day. The message said he or she had stolen the entire Forbes database of registered usernames, emails, and passwords, and went on to demand what may have been a ransom. Just how the data was stolen isn’t exactly clear, but WordPress does allow users with super-administrator privileges to export the full user database.


The message from Ethical Spectrum, who also took credit for an attack on video game company Supercell earlier this month, read as follows:

    Hello Forbes. I found gabs in your servers thats allowed me to download all your databases. i can help you to avoid this again. but i want something in return like fees. the proof that i hacked your databases is this screenshot. its only 1 million user. NOTE: I have some roles. ROLE NUMBER 1. Do not delay in reply.

It was followed by a screenshot showing a few users’ credentials and passwords, which WordPress had cryptographically hashed to make them unreadable.
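Hashing is what made those leaked credentials unreadable: the database stores a salted one-way digest rather than the password itself. A simplified stand-in using the standard library’s PBKDF2 (WordPress’s actual scheme, phpass, differs in its details):

```python
# Why a leaked "hashed" password dump is unreadable: the server keeps only a
# salted one-way digest. Simplified stand-in; WordPress's real phpass differs.

import hashlib
import os

def hash_password(password, salt=None, rounds=100_000):
    """Return (salt, digest); the digest cannot be reversed to the password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds)
    return salt, digest

def verify(password, salt, digest, rounds=100_000):
    """Re-derive the digest and compare; only the right password matches."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, rounds) == digest

salt, digest = hash_password("hunter2")
print(digest.hex())                      # what an attacker sees in the dump
assert verify("hunter2", salt, digest)   # the correct password reproduces it
assert not verify("guess", salt, digest) # anything else does not
```

An attacker holding the dump can only guess candidate passwords and re-run the hash, which is why slow, salted schemes buy users time to change passwords.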

At this point, Forbes administrators locked down the site again and called the FBI.

When we contacted Ethical Spectrum for comment, he claimed he wasn’t associated with the Syrian Electronic Army, and had only learned of the attack from the Syrian Electronic Army’s Facebook page. The Syrian Electronic Army spokesperson also claimed that the group wasn’t associated with Ethical Spectrum.

12:35pm: The Syrian Electronic Army announced on its Twitter feed that it had hacked Forbes. It later wrote that it had gained access to the million-user database, and asked for bids from possible buyers before declaring that it would release the hacked usernames, emails and hashed passwords for free. It published the database Friday night.

IBM: Prototype device supports 400 Gb/s data transfer speeds
IBM analog-to-digital converter aims at big cloud, data center transfers


IBM researchers say they have developed a prototype analog-to-digital converter (ADC) that will quadruple the transfer speeds of huge data dumps between clouds or data centers, to 200-400 gigabits per second (Gb/s).

IBM says the ADC could download 160 Gigabytes, the equivalent of a two-hour, 4K ultra-high definition movie or 40,000 songs, in only a few seconds.
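The “few seconds” claim checks out arithmetically, assuming decimal units and a fully saturated 400 Gb/s link (an idealized figure, ignoring protocol overhead):

```python
# Checking IBM's claim: a 160 GB movie "in only a few seconds" at 400 Gb/s.
# Decimal units; assumes an idealized, fully saturated link with no overhead.

movie_gigabytes = 160
link_gigabits_per_second = 400

transfer_seconds = movie_gigabytes * 8 / link_gigabits_per_second
print(transfer_seconds)  # 3.2 seconds, consistent with "a few seconds"
```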

IBM said the device is a lab prototype, but noted that a previous version of the design has been licensed to Semtech Corp, which will be incorporating the technology into communications platforms expected to be announced later this year.

Big Blue says the 64 GS/s (giga-samples per second) chips for Semtech will be manufactured at IBM’s 300mm fab in East Fishkill, New York, in a 32-nanometer silicon-on-insulator CMOS process, with a core area of 5 mm². The core includes a wide-tuning millimeter-wave synthesizer that lets it tune from 42 to 68 GS/s per channel with a nominal jitter value of 45 femtoseconds root mean square. The full dual-channel 2×64 GS/s ADC core generates 128 billion analog-to-digital conversions per second, with a total power consumption of 2.1 watts, IBM stated.

An ADC converts analog signals to digital, approximating the right combination of zeros and ones to digitally represent the data so it can be stored on computers and analyzed for patterns and predictive outcomes, IBM says.

For example, IBM said scientists will use hundreds of thousands of ADCs to convert the analog radio signals that originate from the Big Bang 13 billion years ago to digital. It’s part of a collaboration called Dome between ASTRON, the Netherlands Institute for Radio Astronomy, DOME-South Africa and IBM to develop a fundamental IT roadmap for the Square Kilometer Array (SKA), an international project to build the world’s largest and most sensitive radio telescope.

The radio data that the SKA collects from deep space is expected to produce 10 times the current global internet traffic and the prototype ADC would be an ideal candidate to transport the signals fast and at very low power – a critical requirement considering the thousands of antennas which will be spread over 3,000 kilometers (1,900 miles). As another way of looking at what SKA will generate, it is expected to turn out enough raw data to fill 15 million 64 GB MP3 players every day.
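To put the MP3-player comparison in storage terms, a quick, illustrative conversion:

```python
# Rough conversion of "15 million 64 GB MP3 players per day" into bytes.
players_per_day = 15e6
bytes_per_player = 64e9                 # 64 GB each, decimal units
daily_bytes = players_per_day * bytes_per_player
print(f"{daily_bytes / 1e18:.2f} exabytes of raw data per day")  # 0.96
```

Roughly an exabyte of raw data every single day, which is why low-power, high-speed conversion matters so much for the project.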

The device was presented at the International Solid-State Circuits Conference in San Francisco this week.

New planet hunter with 34 telescopes to set sights on deep space


The European Space Agency this week said it was putting together a new space telescope that would take aim at discovering habitable exoplanets beyond our solar system.

By integrating 34 separate small telescopes and cameras, the Planetary Transits and Oscillations of stars mission, or PLATO, will be parked about 1.5 million km beyond Earth and monitor what the ESA called “relatively nearby stars, searching for tiny, regular dips in brightness as their planets transit in front of them, temporarily blocking out a small fraction of the starlight.”
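Those dips really are tiny: the fractional drop in brightness during a transit is roughly the square of the planet-to-star radius ratio. For an Earth-size planet crossing a Sun-like star (the radii below are standard reference values, not from the article):

```python
# Transit depth ~ (R_planet / R_star)^2 for a central transit.
r_earth = 6.371e6   # Earth radius in meters
r_sun = 6.957e8     # Sun radius in meters
depth = (r_earth / r_sun) ** 2
print(f"transit depth: {depth:.2e} ({depth * 100:.4f}% dimming)")
# about 8.4e-05, i.e. less than 0.01% dimming
```

Measuring a signal that small, repeatedly and reliably, is exactly why PLATO needs uninterrupted observation from space.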

The PLATO mission, which wouldn’t launch until 2024, will measure the sizes, masses, and ages of the planetary systems it finds, so detailed comparisons with our Solar System can be made.

“In the last 20 years more than one thousand exoplanets have been discovered, with quite a few multi-planetary systems among them,” said mission leader Dr Heike Rauer at DLR, the German Aerospace Center.  “But almost all of these systems differ significantly from our Solar System in their properties, because they are the easiest-to-find examples. PLATO firmly will establish whether systems like our own Solar System, and planets like our own Earth are common in the Galaxy.”

PLATO will use an array of telescopes rather than a single lens or mirror, paired with high-quality cameras, and will have the advantage of observing continuously from space, without the interruption of sunrise or the blurring caused by the Earth’s atmosphere, the ESA stated.

Its position will let PLATO discover planets smaller than Earth, and planets at distances from their host stars similar to the Earth-Sun distance. So far, only a few small exoplanets are known at star-planet distances comparable to or greater than Earth’s. Unlike previous missions, PLATO will focus on these planets, which are expected to resemble our own Solar System planets, the ESA stated.

The mission sounds most like NASA’s successful Kepler space telescope, which has catalogued some 3,583 planet candidates. Recently released analysis led by Jason Rowe, research scientist at the SETI Institute in Mountain View, Calif., determined that the largest increase, 78%, was in the category of Earth-sized planets. Rowe’s findings support the observed trend that smaller planets are more common, NASA stated.

But Kepler has been out of commission since May 2013 with technical problems.  Currently NASA and Ball Aerospace engineers say they have developed a way of recovering Kepler and tests to repurpose the craft are ongoing.

The ESA pointed out some interesting factoids about the PLATO mission:

    During its planned six-year mission, PLATO will observe one million stars, leading to the likely discovery and characterization of thousands of new planets circling other stars. PLATO will scan and observe about half the sky, including the brightest and nearest stars.
    The satellite will be positioned at one of the so-called Lagrangian points, where the gravitational pulls of the Sun and the Earth balance the spacecraft’s orbital motion, letting it hold a fixed position relative to both. Each of the 34 telescopes has an aperture of 12 centimeters.
    The individual telescopes can be combined in many different modes and bundled together, leading to unprecedented capabilities to simultaneously observe both bright and dim objects.
    PLATO will be equipped with the largest camera-system sensor ever flown in space, comprising 136 charge-coupled devices (CCDs) that have a combined area of 0.9 square meters.
    The accuracy of PLATO’s asteroseismological measurements will be higher than with previous planet-searching programs, allowing for a better characterization of the stars, particularly those stellar-planetary configurations similar to our Solar System.
    The scientific objective is based on previous successful projects, like the French-European space telescope CoRoT or NASA’s Kepler mission. It will also take into account the mission concepts that are currently under preparation which will “fill the gap” between now and PLATO’s launch in 2024 – NASA’s Transiting Exoplanet Survey Satellite (TESS) mission and ESA’s ChEOPS mission.

High-profile US national labs team to build 200 petaflop supercomputers


Three principal US national labs today affirmed they will team up to build supercomputers that operate about 10 times faster than today’s most powerful high-performance computing (HPC) systems.

The project, known as the Collaboration of Oak Ridge, Argonne and Livermore (CORAL) national labs will build 200 peak petaflops (quadrillions of floating point operations per second) systems for each of the labs, at a cost of about $125 million each, in the 2017-2018 timeframe, the group stated.

The collaboration sprang from the fact that the labs will all likely be replacing their current supercomputers – Argonne’s Mira, Livermore’s Sequoia and Oak Ridge’s Titan – at almost the same time.

A joint Request for Proposals for the CORAL procurement was issued Jan. 6 and responses were submitted Feb. 18.  Responses to that request are now being evaluated and the plan is that CORAL partners will select two different vendors and procure a total of three systems, two from one vendor and one from the other. Livermore is leading the procurement process, the group stated.

According to a statement, Livermore’s system, to be called Sierra, will be best suited to support the applications critical to nuclear stockpile stewardship. Oak Ridge and Argonne will employ systems that meet the needs of their DOE Office of Science missions which includes all manner of applications from climate change and energy development to advanced manufacturing and national security.

In the draft of the technical requirements for CORAL, written last August, the group wrote: …scientific computation cannot yet do all that we would like. Much of its potential remains untapped, in areas such as materials science, earth science, energy assurance, fundamental science, biology and medicine, engineering design, and national security, because the scientific challenges are too enormous and complex for the computational resources at hand. Many of these challenges have immediate and global importance.

These challenges can be overcome by a revolution in computing that promises real advancement at a greatly accelerated pace.

Planned pre-exascale systems (capable of 10^17 floating point operations per second) in the next four years and exascale systems (capable of an exaflop, or 10^18 floating point operations per second) by the end of the decade provide an unprecedented opportunity to attack these global challenges through modeling and simulation. Data movement in the scientific codes is becoming a critical bottleneck in their performance. Thus the memory hierarchy, and the latencies and bandwidths between all its levels, is expected to be the most important system characteristic for effective pre-exascale systems.
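For scale, the 200-petaflop CORAL machines sit squarely in that pre-exascale range. A quick check of the unit conversions:

```python
# 200 petaflops expressed against the pre-exascale / exascale milestones.
peta, exa = 1e15, 1e18
coral_flops = 200 * peta
print(coral_flops)          # 2e+17 -- squarely in the 10^17 pre-exascale class
print(coral_flops / exa)    # 0.2 -- one fifth of the way to an exaflop
```

So each CORAL system would be about a fifth of the exascale target planned for the end of the decade.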

Google Glass backlash escalates to violence
A Google Glass user was physically assaulted and robbed for wearing the device in a bar this week


I’ve written in the past about the less-than-stellar reception Google Glass has received among the population of people who don’t have access to the high-tech eyewear, or just aren’t interested in a head-worn technology that could record video of passersby without their knowledge.

This week, the backlash turned to violence for one San Francisco tech writer who brought her Google Glass unit to a bar.

Sarah Slocum, whose LinkedIn profile lists her as a contributing editor at Newsdab, said in a Facebook post that she was assaulted by two women at a San Francisco bar after initially showing other patrons how the device works. While being antagonized by the women, whom CBS San Francisco reports were part of a group of bar patrons expressing concern about being recorded with the device, Slocum recorded the beginning of the attack and posted the video to YouTube.

Shortly thereafter, a male patron reportedly stole her Google Glass device off her face, and when she chased after him, two other men stole her wallet and cellphone, Slocum explained in a Facebook post.

Brian Lester, a witness who spoke to CBS San Francisco, says the issue escalated when a male friend of Slocum physically attacked another male patron who made fun of her for wearing Glass. A major contributing factor to the confrontation was the nature of the crowd at the bar, according to Lester.

“You know, the crowd at Molotov’s is not a tech-oriented crowd for the most part,” Lester told CBS San Francisco. “It’s probably one of the more punk rock bars in the city. So you know, it’s not really Google Glass country.”

Lester did clarify that Slocum did not deserve to be physically attacked just for bringing a high-tech toy to a punk rock bar. But that above quote kind of underlines a major issue Glass users face. Glass is hardly any different, in terms of functionality, than a smartphone. People largely just aren’t prepared to see one that’s worn as eyewear, especially at closing time at a dive bar.

Making the situation worse are continued rumors that Glass will use facial recognition to find information about people as they walk down the street. These kinds of reports are perpetuated by companies like FacialNetwork, which developed a facial recognition app for Google Glass while fully aware that it violates the Glass developer policy. The people behind NameTag, which appears to work pretty well, have said that even if Google doesn’t change its policy regarding facial recognition, they will look for competitors that will.

This is a problem with no clear end in sight. One part of the population is so excited about the progression of mobile technology that they move forward with complete disregard for privacy concerns. While violence against Glass users (hopefully) won’t become commonplace anytime soon, the other side isn’t just going to change their minds and warm up to the technology.

Gun-grabbers back for more….


As expected, the gun-grabbers in the legislature just couldn’t help themselves.

Despite losing three of their gun-grabbing colleagues in the Senate, the anti-gun cabal in Denver has deluded themselves into believing that Colorado citizens wouldn’t mind more gun control being rammed down their throats.

Just like last year, the gun-grabbers are on the move — thinking of every conceivable way (including sneaking gun-control through) to strip us of all our rights.


This time, the gun-grabbers are doing it under the guise of so-called “mental health” legislation.

Yesterday, an 81-page “mental health” bill was fast-tracked through the Health, Insurance & Environment Committee on a 7 to 4 party-line vote.

To the general observer, this bill might look harmless, but remember the road to hell is paved with good intentions, and the prime sponsors of this bill are two of the worst gun-grabbers in the legislature — Representative Beth McCann and Senator Linda Newell.

This shouldn’t be a surprise. Rep. McCann has been out to destroy our Second Amendment rights for years.

You may remember last year when she sponsored all of the anti-gun legislation that was forced into law.


Make no mistake, this bill would permanently strip hundreds of thousands of Coloradans of their inherent right to self-defense.

Anyone who seeks voluntary residential treatment, for any amount of time, and is diagnosed with a TEMPORARY MOOD DISORDER such as depression, social anxiety, or even postpartum depression, would be an inch away from being labeled as a second-class citizen.

If this bill becomes law, if you are in treatment or having a rough go of it, odds are you will lose your gun rights without due process.


Instead of having the protection of a jury trial of your peers, a single, anti-gun judge could decide the fate of your rights.

Clearly, no one should have that kind of unchecked power, and this authority was never imagined by our founding fathers.


This legislation should raise a red flag in the mind of every gun owner in Colorado.

This bill (HB 14-1253) is now heading to the floor of the Colorado House of Representatives, and gun owners need to mobilize a full-court press to stop this bill.


It’s imperative you and I stop this bill dead in its tracks, because if it reaches the State Senate, you can be sure anti-gun partisan and bill sponsor Linda Newell will pull out all the stops to put it on Hickenlooper’s desk.

Any “mental health” legislation that the gun-grabbers propose has one purpose: stripping more honest, law-abiding citizens of their gun rights.


While this bill contains a lot of fluff and red tape, the truth is, this bill IS the greatest threat to our Second Amendment rights this year.

Contact your legislators and tell them to vote NO on HB 14-1253, and not to slip into McCann and Newell’s political mousetrap.


Find your legislator by clicking here.

Remember, this bill could arrive on the floor of the House any day now, so don’t hesitate to call them!

For Freedom,

Dudley Brown
Executive Director


P.S. The anti-gun cabal in Denver is at it again!

This time, the gun-grabbers are pushing for more gun-control under the guise of “mental health” legislation.

Make no mistake, this bill would permanently strip hundreds of thousands of Coloradans of their inherent right to self-defense.


While this bill contains a lot of fluff and red tape, the truth is, this bill IS the greatest threat to our Second Amendment rights this year.

Contact your legislators and tell them to vote no on HB 14-1253, and not to slip into McCann and Newell’s political mousetrap.

Find your legislator by clicking here.

$615,000 Mercedes-Benz G63 AMG 6×6 is rolling, rocking, and rumbling its way through New York!


Hide the kids and bring the dog inside, the Mercedes-Benz G63 6×6 is loose! This 9,000 lb. SUV has six wheels, stretches 19 feet in total length, and rides on 37-inch tires. If you’re convinced a zombie apocalypse is coming, or if you really want people to get out of your way during a morning commute, this is the truck for you!

We’d be really excited, if we weren’t a little bit scared too.

Standing nearly 8 feet tall and stretching 19 feet in total length, the six-wheeled Mercedes-Benz G63 AMG 6×6 is like nothing else on the road.


A 536-horsepower biturbo V-8 engine hauls the G63 AMG 6×6 from zero to 60 mph in 7.8 seconds, according to Mercedes-Benz’ stopwatch.


And the fact Mercedes-Benz announced via Facebook that this behemoth is rolling around New York and New Jersey has us scratching our heads (and looking over our shoulders).


Six wheels and 37-inch tires mean ground clearance is, erm, most definitely not going to be a problem.

Is the German automaker planning on bringing this beast stateside? Only a handful were supposedly going to be built.


Unless you’re driving a dump-truck or city bus, this is guaranteed to be your view of the G63 AMG 6×6 – if you’re brave enough to get in its way!

And with a price that crests $600,000, we’re not sure who the target audience might be.

Frustrated SLS AMG Gullwing fans who long for a Peterbilt tractor-trailer?


How many owners do you think are going to head to Home Depot? Yea, we doubt it too. But if Mercedes tosses us the keys, we’re making the ultimate run to IKEA!

You can bet we’ll stay hot on the (XXL-sized!!!) heels of this mega machine as it thunders through the New York area!


Mercedes let the secret out that the G63 AMG 6×6 is rumbling through the New York area. Call it a really solid educated guess, but we think this all has something to do with the impending debut of the C63 AMG sedan (seen here parked next to the almighty G63!).


This week also happens to kick off Mercedes-Benz Fashion Week here in NYC. Hey, if you’re going to make an impression on the catwalk, you might as well go BIG.


Mercedes had planned on only building a handful of these outrageous beasties. Could U.S. sales now be around the corner for the G63 AMG 6×6?


The G63 AMG 6×6 has five, count ’em, five locking differentials. This thing makes a Jeep Wrangler look like a Tonka toy.

By Jarrett Neil Ridlinghafer
Chief Technology Analyst
Compass Solutions, LLC

Lamborghini Nitro tractor gets the work done in hypercar style


The new Lamborghini Nitro tractor pushes Italian style, giving the utility of the tractor that extra Lamborghini edge. This is no hypercar, however, with engines topping out at 130 horsepower.

It’s the tractor you don’t want to get dirty.

Today, the name may evoke images of a fast-racing Gallardo, but Lamborghini’s beginnings were about as far from the race track as possible.


Lamborghini Trattori, the tractor-manufacturing arm of the hypercar maker, shows off its latest farm tool. The 2013 Lamborghini Nitro combines style with rugged functionality, looking just as ready to roll out onto the moon as it does onto the construction site.

With exterior styling designed by Giugiaro Design, the Nitro looks undeniably slick with its shiny white body. LED headlights, a standard in the hypercar world, make their way into this super tractor, as well as a high-tech cabin that looks to be inspired in equal parts by a race car and an airplane. And, yes, “Lamborghini” is emblazoned on the driver’s seat headrest, just in case anyone forgets.


As with any Lamborghini, form and function go hand-in-hand. The Nitro’s honeycomb grille is seriously eye-catching, while it also helps to keep the tractor’s 3.6-liter DEUTZ engine running cool and efficient.

The Nitro’s DEUTZ engine will be available in four power levels ranging from 100 to 130 horsepower while buyers will be able to also pick between mechanical, Powershift, and variable ratio transmissions.

Lamborghini has not yet announced the price for the 2013 Nitro.

By Jarrett Neil Ridlinghafer
Chief Technology Analyst
Compass Solutions, LLC

Big-Data Comes to the Farm – Farmers fear Monsanto is collecting too much crop data


Big data has come to the farm. The world’s two largest seed sellers, Monsanto and DuPont, are building “prescriptive planting” technology that will take in detailed data from farmers and spit out precise guidelines for planting. The upside is that farmers can use the algorithmic advice to easily identify things like the best soil for the best seeds, the amount of fertilizer needed, and optimal density for planting.

Deere tractors beam data directly to DuPont and Dow Chemical


Some farmers and agricultural organizations are worried about the amount of control the industry is ceding to megacorporations, however. Farmers today rely heavily on algorithms and iPads to automate their planting, and that data is easily harvested. Deere even signed a contract to beam data directly from its tractors to DuPont and Dow Chemical, reports The Wall Street Journal. Furthermore, the new technology could price struggling small farmers out of business.


There are also fears that the data services will be used to convince farmers to plant more and therefore buy more seeds. Farmers are also concerned that the data could be used on Wall Street to inform price projections, cutting into their profit on futures contracts. “I’m afraid, as farmers, we are not going to be the ones reaping the benefit,” one farmer told WSJ.

By Jarrett Neil Ridlinghafer
Chief Technology Analyst
Compass Solutions, LLC

Mt. Gox & $350 Million of Investors’ Money Disappear as Bitcoin community goes into damage control mode


The embattled Bitcoin exchange Mt. Gox has gone offline, after several organizations from the Bitcoin community released a joint statement distancing themselves from the Tokyo company’s troubles. Mt. Gox’s website remains inaccessible, and the exchange appears to have deleted its entire Twitter feed.

The joint statement was originally billed as “regarding the insolvency of Mt. Gox,” but was later updated to remove that language. A spokesman for the group told Recode, however, that “Mt. Gox has confirmed it will file bankruptcy in private discussions with other members of the bitcoin community.” Mt. Gox did not respond to requests for comment from The Verge.

“As with any new industry, there are certain bad actors that need to be weeded out.”

An unverified document purporting to show Mt. Gox’s “crisis strategy” alleges that the exchange has lost over 744,000 bitcoins in a theft dating back several years. The theft is said to have been enabled by the malleability bug that caused Mt. Gox to halt all withdrawals earlier in the month, though the document’s authenticity cannot yet be confirmed.
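Transaction malleability, in simplified terms: a Bitcoin transaction's ID is a hash over its full serialized form, signature bytes included, so a third party can re-encode the signature without invalidating the spend, and the same payment reappears under a different ID. A toy illustration (the serialized strings below are made up, not real transaction formats):

```python
import hashlib

def txid(tx_bytes: bytes) -> str:
    # Bitcoin txids are the double-SHA256 of the serialized transaction,
    # which includes the signature encoding.
    return hashlib.sha256(hashlib.sha256(tx_bytes).digest()).hexdigest()

original = b"inputs|sig=ENCODING_A|outputs"   # hypothetical serialization
mutated  = b"inputs|sig=ENCODING_B|outputs"   # same spend, tweaked signature bytes

# The payment is unchanged, but its ID is not -- software that tracks
# withdrawals by txid alone can be fooled into thinking a payout failed.
print(txid(original) == txid(mutated))        # False
```

An exchange that re-sends "failed" withdrawals based only on the original txid could, in principle, be drained this way, which is how the leaked document describes the losses.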

“This tragic violation of the trust of users of Mt. Gox was the result of one company’s actions and does not reflect the resilience or value of Bitcoin and the digital currency industry,” read the joint statement, backed by companies including Coinbase, Blockchain, Bitstamp, Kraken, Circle, and BTC China.


“It looks like the company has finally collapsed.”

“There are hundreds of trustworthy and responsible companies involved in Bitcoin. These companies will continue to build the future of money by making Bitcoin more secure and easy to use for consumers and merchants.  As with any new industry, there are certain bad actors that need to be weeded out, and that is what we are seeing today.”

Mt. Gox CEO Mark Karpeles stepped down from the Bitcoin Foundation’s board on Sunday, but has otherwise not been heard from in recent days. “It looks like the company has finally collapsed,” said Kolin Burges, the Bitcoin trader who started protesting outside Mt. Gox’s Tokyo headquarters on February 14th. “The question is whether people will get any of their assets back.”

By Jarrett Neil Ridlinghafer
Chief Technology Analyst
Compass Solutions, LLC

Over 322,000 PS4 consoles sold on Japanese opening weekend


The PlayStation 4 appears to be off to a good start in Japan. According to figures from Japanese magazine Famitsu, 322,083 PS4 consoles were bought in the first two days on sale — that’s nearly four times as many as the 88,443 PS3 systems sold on its opening weekend back in 2006.


Nearly four times as many as PS3

However, the circumstances were a little different. While the PS3 was extremely supply-constrained for its first few months, and its successor remains hard to find around the world, Sony appears to have secured a lot of PS4 stock for the Japanese launch.


It’s also worth noting that the Wii U sold 308,570 units in Japan in its first week on sale before Nintendo’s numbers nosedived. Still, if Sony can maintain a healthy pace of sales in its home territory as it has elsewhere, the Japanese giant has reason to feel encouraged.

By Jarrett Neil Ridlinghafer
Chief Technology Analyst
Compass Solutions, LLC

Samsung dives into fitness wearables with the Gear Fit


Samsung applies its smartwatch technology to something you can wear to the gym

By Dan Seifert

Alongside the new Gear 2 and Gear 2 Neo announced at Mobile World Congress in Barcelona, Samsung is expanding its line of wearable technology to include a fitness tracker. The new Gear Fit is a downsized version of its smartwatch siblings, with a focus on tracking your heart rate and counting your steps. It looks like a Samsung version of the popular wrist-worn Fitbit trackers, but with a larger screen and quite a few more functions.

The display on the Gear Fit is the real draw: it’s a rectangular, curved AMOLED touchscreen panel with characteristic-for-Samsung vibrant colors and exceptionally wide viewing angles. The curve in the display allows the Fit to conform around your wrist yet still have a large enough screen to make text readable and buttons easy to press with your finger. It’s not the first product from Samsung to have a curved display, but it does feel like it’s the first to actually benefit from its curvature. Plus it just looks cool.

The Fit doesn’t have the camera, microphone, or speaker of the Gear 2 and Gear 2 Neo, but it still can receive all of your smartphone’s notifications and alerts, making it one of the smartest fitness trackers we’ve seen yet. The focus is clearly on fitness, however, as the Fit includes a real-time fitness coach to encourage you to speed up or slow down via alerts, the ability to measure your heart rate in real time, and syncing with Samsung’s S Health apps on Galaxy smartphones.

It’s significantly more comfortable to wear than the other Gear smartwatches, mainly due to the fact that it’s half the weight and much narrower. The rubber strap is interchangeable and available in a variety of colors, and its basic clasp is pretty easy to close with one hand. The band’s soft-touch finish was comfortable in the few minutes we had to wear the Fit, though we’ll have to see how comfortable it is when breaking a sweat at the gym.

Samsung promises that the Fit will last three to four days between charges with normal use, and light users will be able to eke out even more time than that. Like the new Galaxy S5 and other Gear smartwatches, it’s IP67 rated for water and dust resistance, so it shouldn’t stop working when it’s doused in sweat or rinsed off in the shower.

Fitness wearables are becoming increasingly popular, and Samsung is a bit late to enter what is already a pretty crowded market. But from what we’ve seen thus far, the Gear Fit is quite impressive, and its integration with Samsung’s existing ecosystem is powerful. Samsung isn’t yet revealing how much the Fit will cost when it arrives on April 11th, but if it’s able to offer it for a reasonable price, we could see it easily rising to the top of the fitness-wearables heap.

By Jarrett Neil Ridlinghafer
Chief Technology Analyst
Compass Solutions, LLC

Apple Security Bug Could Let Hackers Intercept Encrypted Data

Security & Compliance-6

Apple on Friday quietly pushed out an update for its mobile devices to fix a major security flaw that could allow attackers to intercept encrypted email and other data. Experts warn that Mac desktops and laptops are still at risk.

The flaw, which relates to how iOS 7 validates the SSL certificates intended to protect websites, could let an attacker on the same network as a victim eavesdrop on all user activity. Apple did not reveal too much information about the problem, though experts who have studied the bug said hackers could launch so-called man in the middle attacks to intercept messages as they pass from a user’s device to sites like Gmail, Facebook, or even online banking.

“An attacker with a privileged network position may capture or modify data in sessions protected by SSL/TLS,” Apple said in its advisory.

As PCMag’s Security Watch blog noted, SSL certificate validation is “critical” for establishing secure sessions with websites.

“By validating the certificate, the bank website knows that the request is coming from the user, and is not a spoofed request by an attacker,” PCMag’s Fahmida Rashid wrote. “The user’s browser also relies on the certificate to verify the response came from the bank’s servers and not from an attacker sitting in the middle and intercepting sensitive communications.”

A patch is available for the iPhone 4 and newer Apple smartphones, as well as the iPod touch (5th generation), iPad 2, 3, and Air. Those who have not already installed the update should do so immediately.

But the problem doesn’t end there. The same flaw also affects the latest version of Apple’s Mac OS X desktop software, which has several applications like Safari that rely on the faulty SSL/TLS library, called SecureTransport, Adam Langley, a senior engineer at Google, wrote in a blog post. At this point, OS X has not yet been patched, though a fix is expected soon and users should install it as soon as it’s available.

While waiting for the patch, there are a few ways to stay safe. For starters, avoid connecting to other people’s Wi-Fi networks, even if they are password-protected, Paul Ducklin, head of technology at security firm Sophos, wrote in a blog post Monday. If you are using a Mac for business, consider asking your employer to set you up as part of the company’s VPN if they have one.

It’s also a good idea to use alternative browsers like Firefox or Chrome until the patch is out. These browsers use their own SSL/TLS libraries, thereby “immunizing them against the bug in Apple’s SecureTransport library,” Ducklin wrote. Once the fix is available, it will be safe to switch back to Safari.

By Jarrett Neil Ridlinghafer
Chief Cloud Consultant
Compass Solutions, LLC

Data produced by wearable devices is providing users with a new view of how they can maintain a healthy lifestyle.


Contributed by: Wesley Robison

What is your ideal vision of health and how do you measure it? Whatever metric you may use, it’s likely a different standard than the person standing next to you. Over the years, people have counted calories and reps to benchmark and compare their relative fitness, but as our understanding of health has evolved, there are more and more stats to track. Thankfully, with an explosion of wearable technology and mobile apps, much of that work has moved from manual entries in notebooks and spreadsheets to a level of automated tracking.

In fact, we’re at a point where our devices are not only able to seamlessly monitor a wider range of activities and behaviors, from sleep and steps to stress, but to make sense of that data through meaningful stories and visualizations, supporting us on our individual journeys to healthier living. This trend of Holistic Tracking is featured in PSFK Labs’ Future of Health report, which further explores the devices, metrics and visualizations that companies are using to educate consumers and help them actively manage their health.

While the demographics of the global market are continually expanding and becoming more diverse, one unifying factor in the quest for greater health is the relationship between access to information and behavior change. In a 2013 study from Pew Research, 46% of people who tracked their health say that this activity has changed their overall approach to maintaining their health or the health of someone for whom they provide care.

In a conversation with Travis Bogard, Jawbone’s VP of Product Management & Strategy, he noted, “We see this huge gap that exists between intention and action ‑‑ what people think they’re doing, and what they’re actually doing ‑‑ and I think the transparency of seeing that starts to help people understand what their patterns are, and where can they make adjustments to live the life that they really want to.”

As brands and healthcare providers look to engage consumers around the Holistic Tracking trend, the PSFK Labs’ team suggests considering the following questions:

  • What are the next wave of personal metrics that are going to be essential for maintaining good health?
  • How do we move from historical tracking to predictive warnings, and what lifestyle behaviors should be the focus?
  • How do we standardize the data being gathered and make it shareable with the wider healthcare system?
  • As this data is shared with insurance companies and providers, how do we ensure that consumers maintain ownership and receive greater value?
  • What new services will be needed to connect and analyze a wider range of data sources, and deliver deeper meaning?
  • How can we tap into “in the moment” achievements or long-term goals to support consumers on their goal to better health?

With the help of our partner Boehringer Ingelheim, PSFK Labs has released the latest Future of Health Report, which highlights the four major themes and 13 emerging trends shaping the evolving global landscape of healthcare. To see more insights and thoughts on the Future of Health, visit the PSFK page.

By Jarrett Neil Ridlinghafer
Chief Cloud Consultant
Compass Solutions, LLC


Tenable Adds Cloud Management and Multi-Scanner Support to Nessus


Tenable Network Security, Inc., the leader in real-time vulnerability and threat management, today announced that powerful cloud management capabilities will be delivered to Nessus users in a March 3rd update. Departments, teams and remote locations will have, as part of their subscription, the ability to control internal and external scanners from a primary scanner. Nessus customers with Nessus Perimeter Service will be able to do so through the cloud. Nessus is also introducing a new simplified view of scan findings, affected hosts, and compliance status with one-click drill-down for details.

Many organizations have multiple scanners for different segments of their networks and geographical locations. Managing multiple vulnerability scanners, scheduling scans, and processing results can be a challenge for organizations with a single person or small team responsible for vulnerability and compliance scanning.

“The introduction of the new multi-scanner management capability in Nessus allows users to benefit from the robust capabilities of the most widely-used vulnerability scanner in the world—while saving time, effort, and resources by managing internal and external scanning from a single point in the cloud or on premise,” said Ron Gula, CEO of Tenable Network Security.

Key new features in Nessus include:

  • Cloud management portal—Nessus Perimeter Service can now be used as a primary scanner and will be able to control multiple secondary Nessus scanners (internal Nessus scanners or Nessus Amazon Machine Images) regardless of location. At no extra charge, Perimeter Service customers may also submit up to two external scans per calendar quarter for Tenable PCI ASV validation.
  • Multi-scanner support—Users can control multiple secondary Nessus scanners (internal Nessus scanners or Nessus Amazon Machine Images) and view scan results from a primary Nessus scanner on premises.
  • Simplified scan results view—Scan output is now organized by findings followed by a list of affected hosts. With one click, users can quickly drill down on host details for that vulnerability. Compliance results show passed/failed/skipped status, affected hosts, and the reason for the status.
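The primary/secondary model described above amounts to a primary scanner fanning scan targets out to whichever secondary scanners it controls. As a rough illustration of that coordination pattern only (this is not Tenable’s API; the scanner names, network segments and round-robin assignment policy are invented for the example):

```python
from dataclasses import dataclass
from itertools import cycle

@dataclass
class ScanJob:
    scanner: str   # secondary scanner that will run the scan
    target: str    # network segment to scan

def assign_scans(scanners, segments):
    """Round-robin network segments across secondary scanners,
    roughly as a primary scanner might when coordinating a run."""
    if not scanners:
        raise ValueError("at least one secondary scanner is required")
    rotation = cycle(scanners)
    return [ScanJob(scanner=next(rotation), target=seg) for seg in segments]

# Example: a primary coordinating one internal scanner and one cloud image
jobs = assign_scans(
    scanners=["internal-scanner-1", "aws-ami-scanner"],
    segments=["10.0.1.0/24", "10.0.2.0/24", "192.168.0.0/24"],
)
for job in jobs:
    print(f"{job.scanner} -> {job.target}")
```

In the real product the primary would also collect and merge each secondary’s results; the sketch only shows the distribution half of the pattern.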

“The new features in Nessus build on a year of important configuration, post-scan analysis, and usability improvements,” said Renaud Deraison, chief research officer at Tenable and the inventor of Nessus technology. “Now teams, departments and organizations can maximize security to match their needs and available resources — whether through Tenable’s enterprise platform SecurityCenter or through these new Nessus features.”

The new Nessus features will be available on March 3, 2014, at no additional charge to Nessus and Nessus Perimeter Service customers.

About Tenable Network Security

Tenable Network Security is relied upon by more than 20,000 organizations, including the entire U.S. Department of Defense and many of the world’s largest companies and governments, to stay ahead of emerging vulnerabilities, threats and compliance-related risks. Its solutions continue to set the standard to identify vulnerabilities, prevent attacks and comply with a multitude of regulatory requirements. For more information, please visit www.tenable.com.



By Jarrett Neil Ridlinghafer
Chief Cloud Consultant
Compass Solutions, LLC

AFORE Expands CloudLink Platform with SecureVM and SecureFILE

Security & Compliance-6

AFORE Solutions, Inc., a leader in cloud security and data encryption management, today announced the addition of CloudLink SecureVM and CloudLink SecureFILE to the CloudLink encryption platform. The additions build on the existing CloudLink SecureVSA and provide AFORE customers with unmatched flexibility to layer encryption at multiple points of the cloud computing stack with storage, virtual machine, file and application level solutions deployed and managed from a common framework.

As cloud initiatives ramp up and enterprise workloads become distributed across hybrid and multi-tenant virtualized infrastructures, it becomes essential to secure sensitive data and meet regulatory compliance requirements, as well as lock down the cloud against external and internal threats. CloudLink provides the encryption foundation that protects mission-critical data across a broad range of use cases from a single integrated platform. CloudLink integration with industry-leading hypervisor and cloud platforms enables IT personnel to efficiently deploy security controls at all levels of the infrastructure. The net impact is better control, lower TCO and improved business agility to secure sensitive data and embrace the cloud with confidence.

“The world has changed and IT needs to adapt to meet a host of new and pervasive threats to data. With these two new CloudLink modules, we’re providing our customers the means to ensure that every touch point in their cloud infrastructure is secure,” said Jonathan Reeves, Chairman and Chief Strategy Officer at AFORE. “Our focus is to empower customers with solutions that seamlessly integrate with their virtualization, cloud and key management platforms enabling them to have full control over data security while leveraging existing IT processes.”

The addition of the two new CloudLink encryption modules, SecureVM and SecureFILE, provides customers with greater deployment flexibility by offering more granular security controls for a wider range of use cases, including securing hosted desktops/DaaS and business applications. CloudLink SecureVM addresses a critical need to protect boot-volume integrity for both application server and VDI machine images in virtualized and public cloud environments by providing secure pre-boot authentication and encryption of volumes. CloudLink SecureFILE enables fine-grained policies that target encryption at sensitive applications such as Microsoft SQL Server, SharePoint, Exchange and Office, with unique application templates that simplify security deployment. SecureFILE file-level encryption is also specifically designed to support highly scalable file servers, providing the granularity required to preserve their advanced file-management capabilities.

As more sensitive workloads move to the cloud, service providers such as IPR International, a leading provider of IaaS, disaster recovery, and data protection, are leveraging CloudLink to offer Encryption as a Service (EaaS) as a value added service on top of their computing and storage offerings. “CloudLink provides us with an encryption management platform that enables us to provide EaaS to secure a broad range of customer initiatives including hosted virtual desktops, application and database servers and virtualized storage,” said Bryan Durr, IPR’s Director of Product Architecture. “Easy customer provisioning and advanced security options that place key control in our customers’ hands allow us to offer a first-class security service that will help drive customer confidence and adoption of the cloud.”

AFORE will be showcasing the CloudLink platform at the RSA Conference at South Expo booth #2501, February 24-28 at the Moscone Center in San Francisco.

About AFORE:

AFORE is a leading provider of advanced data security and encryption management solutions that protect sensitive customer information in multi-tenant private, public and hybrid clouds. AFORE CloudLink has been recognized with several prestigious industry awards, including the Best of VMworld Gold Award in the Security/Compliance for Virtualization category in 2013. CloudLink SecureVSA has been certified Vblock Ready by VCE to run on Vblock Infrastructure Platforms. AFORE is an EMC Select Business Partner as well as an RSA Technology partner. For more information visit www.aforesolutions.com and follow us on Twitter @aforesolutions.

About IPR International:

IPR International is a recognized industry leader offering co-location, private cloud, and Infrastructure-as-a-Service (IaaS) solutions, as well as cloud-based data management, data protection services, and a full range of managed solutions. IPR protects, preserves, and secures its clients’ data, making it available at all times, no matter when or where the data was created or is needed. Headquartered in Wayne, PA, IPR has multiple redundant data centers and serves clients in 17 countries around the globe.

Copyright © 2014 All Rights Reserved. CloudLink is a trademark of AFORE Solutions, Inc. All other trademarks, trade names, service marks and logos referenced herein belong to their respective companies.



By Jarrett Neil Ridlinghafer
Chief Cloud Consultant
Compass Solutions, LLC

U.S. Businesses To Feel Impact From NSA Fallout, Says Richard Clarke


By Robert Westervelt on February 24, 2014, 3:51 pm EST

U.S. technology companies are losing ground to international competitors as a result of the fallout from National Security Agency leaks, and the extent of the negative impact on the economy is not yet known, said Richard Clarke, president of Good Harbor Security Risk Management and one of the members of President Barack Obama’s surveillance review panel.

Speaking to hundreds of attendees at the Cloud Security Alliance Summit, Clarke, who served as special adviser to the president for cybersecurity, and national coordinator for security and counterterrorism for the last three U.S. presidents, said the market-share slide in Europe, Latin America and Asia is a consequence of poor U.S. policymaking that allowed surveillance activities at the NSA, the FBI and the CIA to go relatively unchecked and increasingly out of control.

“There was a complete disconnect from the policymakers and their desire to collect information and the people who were collecting it,” Clarke said. “Policymakers have to spend a great deal of time being very specific about what intel they want and need and what intel they don’t want to be collected.”


The daylong Cloud Security Alliance Summit, held the day before the official start of RSA Conference 2014, has long sought to establish trust in, and visibility into, the security processes behind cloud-based service providers and their systems. The impact of the NSA leaks on cloud services could be damaging abroad, Clarke said. He railed against proposals in Europe and Latin America for laws that would limit cloud services and require data to be geographically housed within certain territories. Certain countries are using the NSA leaks as propaganda because of economic interests and a desire to boost local companies against international competitors, Clarke said.

“I don’t think I’m going to get into any trouble if I say that NSA and any other world-class intelligence agency can hack into databases even if they are not in the United States,” Clarke said.

Clarke urged summit attendees to read the review panel report, a 303-page document issued in December. He said trust in encryption needs to be reestablished to promote adoption of strong data protection practices. He called on the U.S. government to immediately and appropriately disseminate information about zero-day vulnerabilities so that weaknesses can be fixed quickly, calling the practice essential to defending the nation’s critical infrastructure systems.

Solution providers told CRN they have not yet felt any negative impact or any pushback from their clients regarding the software or hardware they sell. The move to cloud-based services and infrastructure has been a gradual shift over time and continues to happen at a slow but steady pace despite the NSA leaks, said Mark Robinson, president of Findlay, Ohio-based managed IT security and service provider CentraComm.

“I think that there’s been so much NSA news coming out that people have grown numb to it at this point,” Robinson said. “Data protection and cloud security are important issues to our clients, and at the end of the day, they’re going to desire it from all of the companies they do business with and we recognize that.”

Businesses want infrastructure and services that are reliable and will have a positive impact on the bottom line, not a negative one, said Pat Grillo, president and CEO of Atrion Communication Resources, a Branchburg, N.J.-based RSA partner. Grillo said he hasn’t seen any negative impact despite allegations that RSA and other technology companies aided NSA surveillance activities, allegations that the vendors deny.

The  NSA, FBI and CIA have a group of incredibly talented people dedicated to protecting the country, Clarke said. The agencies are tracking down terrorists and people trafficking weapons of mass destruction. They are working to uncover operations of human trafficking and other human rights violations for the U.S. and its allies, Clarke said.

“We did not find people listening to your phone calls and emails. They’re not doing that but they could and that was the central problem,” Clarke said.

Clarke said the review panel discovered that while the NSA was good at getting into networks to collect information, its internal security practices were poorly maintained and based on perimeter defense methodology and the idea that once people are vetted, they can be given access to the network.

“It was abysmally poor and criminally negligent on security of its own internal network security,” Clarke said of the NSA. “The lesson here was when you say you are putting perimeter defense as a model behind you, that is good rhetoric, but follow up on it with good internal security as well. They didn’t.”


Clarke called for the formation of a national privacy and civil liberties oversight board that would be given authority to review all intelligence agency activities. People need to know that there is someone protecting civil liberties and ensuring that privacy rights are being maintained, he said. Clarke also called for the development of international standards on appropriate activity conducted by intelligence agencies. He said a dialogue should be established with other countries about appropriate behavior and when activity crosses the line.


By Jarrett Neil Ridlinghafer
Chief Cloud Consultant
Compass Solutions, LLC

360Heros Announces Breakthrough 360 Video Camera Gear that Takes Full Sphere 12,000 by 6,000 Video and Photo Imagery using GoPro Cameras


Michael Kintner, CEO, inventor, founder and manufacturer of 360Heros Video Gear announces the release of the H3Pro10HD (360 VIDEO IN SUPER HD) camera gear for production of 12,000 x 6,000 full sphere Seamless Dome imagery. This breakthrough system uses the mighty GoPro Hero3 Black or Hero3 Black Plus for the most compact and powerful 360 camera video and photo array. To assemble the 360 Plug-n-Play HD Edition, simply snap in 10 GoPro Hero3 Blacks to produce 12K mammoth content for IMAX size screens in immersive screening environments, or scale down the imagery for stunning sharp video and photos for any dome size, projection system or web viewer.

Designed in response to overwhelming requests from dome content producers and exhibitors, the H3Pro10HD supports production at resolutions and frame rates previously only possible with render-intensive CGI content or expensive and complex mirror camera rigs. The 360Heros gear and support software packages allow producers to shoot, stitch and present content on lightning fast timelines compared to CGI. The possibilities for applications are endless – from high end immersive cinema productions, to interactive entertainment, sporting and educational shows, to virtual reality integration with the Oculus Rift VR.

The seven different versions of 360Heros 360 Plug-n-Play holders are constructed from airline-grade flexible nylon and designed to capture full spherical HD 360-degree by 180-degree video on the most challenging shoots, from extreme environments to intimate spaces. The rigging potential of the lightweight (1.5 oz – 1.8 oz with GoPro Hero3 Blacks) video gear for land, sea or air is limitless. In the past few months, 360Heros video and photo gear has been used to film sharks off a Micronesian reef in Yap, has accompanied a Mount Everest summit trek and an Arctic documentary, powered the Webby Award-winning Beck/Chris Milk interactive concert, and supported a wide range of large-format display and interactive projects for companies such as Under Armour, Ford and Visa, as well as the United States military.

360Heros is excited to share our newest technology for delivering content All Around You. Be sure to visit with us January 7-10 at the 2014 CES show in Las Vegas with 3D Systems and Cubify.com. Drop by booth #31424 to see our demos and videos and get a first-hand look at Land, Sea and Air 360 Video and Photo presentations with equipment that is all 3D printed. Visit http://www.360Heros.com to see how easy it is to create stunning full spherical HD 360 experiences.

About Michael Kintner
Michael Kintner is the CEO, inventor and founder of 360Heros, a cutting-edge 360 degree video company with products and services that utilize GoPro cameras and other small HD-quality cameras for ultra high resolution 360 video production in any environment. Michael holds a BS in Engineering and Information System Technology and an MA in Information Technology Management, and combines his experience in videography, aerial photography, mechanical engineering and robotics to design, develop and manufacture 360Heros state-of-the-art 360° Plug-n-Play holders. For more information on 360Heros products and video production & post production services, visit http://www.360heros.com. For media inquiries contact Diane Dennis at Inspired Media Communications at info(at)inspiredmc(dot)com.

By Jarrett Neil Ridlinghafer
Chief Cloud Consultant
Compass Solutions, LLC

Digital Advertising Alliance Unveils Program to Provide Consumer Privacy Controls in Mobile Environments


The Digital Advertising Alliance (DAA), the operator of the interactive media and marketing industry’s largest and most successful consumer preference program, today unveiled new guidance for assuring that its Self-Regulatory Principles currently enforced on the web are honored in mobile environments, providing consumer-friendly privacy controls in this fast-growing medium.

The new DAA guidance for the first time advises advertisers, agencies, media, and technology companies how to provide consumers the ability to see and exercise control over the use of cross-app, personal directory, and precise location data in mobile apps. This enforceable guidance reflects the reality that companies and brands engage with their customers on a variety of platforms, including mobile, and explains how the DAA’s program applies consistently across channels to certain data practices that may occur on mobile or other devices. A key benefit is enhanced consumer transparency around the collection of data across different mobile apps. A full copy of the Mobile Guidance is posted here: http://www.aboutads.info/DAA_Mobile_Guidance.pdf

The guidance builds on the 2010 Self-Regulatory Principles for Online Behavioral Advertising and the 2012 Principles for Multi-site Data, which are the bases for the digital ad industry’s successful Advertising Option Icon initiative. This self-regulatory program, the largest and most comprehensive in the interactive advertising industry, is responsible for the ubiquitous Advertising Option Icon that is served on trillions of advertising impressions a year, signaling to consumers that various forms of online data are being collected and used to deliver them advertising tailored to their interests, and offering them a centralized system to opt out of the use of that data.

The DAA consumer notice and choice program has been praised by the Obama Administration, the Commerce Department and the Federal Trade Commission. The DAA’s mobile guidance released today aims to offer businesses policies and consumers assurances that the same notice and control consumers have in desktop Internet advertising environments will apply in the more complex, evolving world of the mobile Internet.

“Consumers love being mobile,” said DAA Managing Director Lou Mastria. “And they also understand that much of the enjoyment, convenience and information they receive in mobile environments is supported by advertising. Brands, too, are increasingly converging communications channels, making little distinction as to how customers interact with them. Our newly announced mobile guidance is intended to help brands, agencies, ad networks and mobile providers navigate data collection and use across mobile and other platforms according to our established self-regulatory principles, all backed by third-party enforcement.”

Added DAA General Counsel Stu Ingis: “The success we’ve achieved in the desktop browser ecosystem has proven the merits of this program. The recognition of the Advertising Option Icon has grown, consumers are learning about the value of the ad-supported ecosystem, and the economic engine that is digital advertising continues to innovate and provide jobs in a privacy-friendly manner. As more consumers go mobile with their devices, we are taking this successful program and reaching them there.”

“We believe this technology-neutral guidance will continue to enable relevant, ad-supported content to flourish while providing mobile users with consistent privacy mechanisms such as transparency, choice, de-identification, and limitations on data use, which have become commonplace on the desktop web with the DAA Icon program,” Ingis said.

The new guidance explains how to provide enhanced notice about data collected on a particular device over time and across different applications, as well as notice for the collection of precise location or personal directory data. It also applies the principle of consumer control to these data collection practices.

Additionally, the guidance applies to the mobile space protections for sensitive data collection, as well as the restriction on the use of covered data for eligibility purposes. The new guidance also makes clear that these data collection and use practices fall within the scope of the DAA’s existing accountability programs, enforced by the Council of Better Business Bureaus and the Direct Marketing Association.

The mobile guidance is the product of a nearly two-year effort among mobile marketing stakeholders, from brands to mobile providers. Today’s announcement begins an implementation phase. During this phase, the guidance will not yet be in effect or enforced while industry education about the guidance is undertaken.

The DAA’s cross-industry self-regulatory initiative spans the entire marketing-media ecosystem. Collectively, the associations which make up the DAA represent more than 5,000 leading U.S. corporations across the full spectrum of businesses that have shaped today’s transformed media landscape to meet consumer demand for innovative products and services. International DAA programs are currently operating in more than 25 nations around the world.

About The DAA Self-Regulatory Program for Online Behavioral Advertising
The DAA Self-Regulatory Program (http://youradchoices.com) for Online Behavioral Advertising was launched in 2010 by the Digital Advertising Alliance (DAA) (http://aboutads.info), a consortium of the nation’s largest media and marketing associations including the American Association of Advertising Agencies (4A’s), the Association of National Advertisers (ANA), the American Advertising Federation (AAF), the Direct Marketing Association (DMA), the Interactive Advertising Bureau (IAB) and the Network Advertising Initiative (NAI). These associations and their thousands of members are committed to developing effective self-regulatory solutions for consumer choice in the use of online data.

More on the Mobile Guidance from Founding DAA Participants

Nancy Hill, President and CEO, 4A’s
“Expanding the DAA program in the mobile environment proves that such a self-regulatory program can meet the ever-pressing challenge to evolve with new technologies. We felt that it was imperative that our members play a key role in such a pivotal effort to advance the art of advertising while bringing consumers more transparency and control in the mobile world.”

James Datri, President and CEO, AAF
“The advertising industry has long had the gold standard for industry self-regulation,” said AAF President James Edmund Datri. “This move into the mobile environment demonstrates our commitment, once again, to consumer trust and protection. Technologies and communications methods may change, but our promise to treat our customers in a fair and ethical way never will.”

Bob Liodice, President and CEO, ANA
“The DAA program is not only global, but now also mobile. The expansion of the DAA program into the mobile realm is an extraordinarily important development. Mobile is, by far, the fastest growing media category. Half of all U.S. adults now have a connection to the web through either a smartphone or tablet, so it’s all the more critical that they have control over how they receive advertising on their mobile devices. The DAA mobile guidelines give them that control.”

Carrie Hurt, Interim President & CEO, Council of Better Business Bureaus
“Deploying the DAA program across the mobile ecosystem brings us another step forward in providing greater compliance and transparency for consumers through better privacy disclosures and controls.”

Linda Woolley, President and CEO of DMA
“Using data in a responsible manner is essential in building and keeping consumer trust, and provides great benefit to businesses and consumers alike,” said DMA President and CEO Linda A. Woolley. “DMA’s mission is to advance and protect responsible data-driven marketing, and DAA’s next step into the mobile space is a great example of how self-regulatory programs can be implemented across the full spectrum of technology platforms.”

By Jarrett Neil Ridlinghafer
Chief Cloud Consultant
Compass Solutions, LLC

Promisec Unveils Promisec Integrity, a Cloud-Based Solution for Rapid Endpoint Security and Remediation

Security & Compliance-5

Promisec, a pioneer in endpoint security and compliance solutions, today announced plans for Promisec Integrity, a series of cloud-based offerings to help small-to-medium enterprise organizations with endpoint security and remediation. The full suite of offerings enables IT organizations to ensure compliance, defend against the rising number of cyber threats impacting business today and validate endpoint integrity across all deployed security and IT endpoint technologies. The launch marks a major evolution for endpoint security in the cloud, with Promisec rolling out new solution offerings throughout the remainder of the year.

In today’s constantly evolving threat environment, IT organizations must take more aggressive security stances than ever to stay ahead of vulnerabilities. However, many existing endpoint protection platforms fail to stop targeted attacks. According to Gartner’s recent report, “Malware is Already Inside Your Organization; Deal with It,” published February 12, 2014, “security organizations must assume they are compromised, and, therefore, invest in detective capabilities that provide continuous monitoring for patterns and behaviors indicative of malicious intent.” To that end, the report says, “a new class of endpoint threat detection and response (ETDR) vendors is starting to fill the gaps left by traditional endpoint protection platform (EPP) suites. [They] provide ‘indicators of compromise’ (IOCs) to lead detection efforts and real-time investigation tools that allow for rapid interrogation of endpoints and endpoint history.”

Promisec Integrity provides continuous monitoring for advanced threat detection by capitalizing on the collective intelligence from existing security and management tools to deliver certainty that all tools and processes are operational, agents are up to date, and software is patched. This provides IT organizations with the ability to improve their incident response time, and thus limit business impact. Promisec will be unveiling specific solutions available for Promisec Integrity through the end of 2014, beginning with a cyber detection capability in the coming weeks. Other planned additions to Promisec Integrity include:

  • Endpoint inspection – a continuous monitoring service that will scan and report on the entire endpoint environment to reveal its security state, including uncovering unauthorized applications, validating that antivirus and SCCM/patch management solutions are functioning properly, and ensuring compliance with corporate IT policies
  • Tamper detection – a file validation service that will confirm if files have been tampered with or corrupted by attack, either as it is happening or after an attack
  • Crisis resolution – an incident response service that will sweep customer endpoints for specific issues, such as a particular malware attack or an older version of software that is prone to exploitation, and offer on-the-spot remediation and reporting on the current status of those endpoint environments
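The tamper-detection idea above boils down to comparing a file’s current cryptographic digest against a recorded known-good baseline. A minimal sketch of that concept (not Promisec’s implementation; the function names are illustrative):

```python
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """SHA-256 digest of a file's contents, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def baseline(paths):
    """Record known-good digests for a set of files."""
    return {str(p): fingerprint(Path(p)) for p in paths}

def detect_tampering(known_good, paths):
    """Return the files whose current digest no longer matches the baseline."""
    return [str(p) for p in paths
            if known_good.get(str(p)) != fingerprint(Path(p))]
```

A production service would additionally guard the baseline itself against modification and run the comparison continuously; the sketch shows only the core check.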

The introduction of Promisec Integrity marks an important milestone for the company as it moves to a cloud-delivery model. This strategy is designed to make enterprise-grade endpoint security and compliance, where the company has more than a decade of experience, accessible for small-to-medium sized enterprises with limited resources. It ensures easy implementation, painless upgrades and simple management, all at a more affordable price point.

“We are entering the next phase in Promisec’s evolution, one that puts endpoint security in the cloud so it is simpler to manage even as the threat landscape becomes increasingly challenging,” said Dan Ross, CEO, Promisec. “Promisec Integrity is like a ‘don’t panic’ button that can quickly provide peace of mind—and a course of action—for small-to-medium enterprises that must get ahead of the latest threats before they negatively impact corporate IP, operational efficiency and, ultimately, brand trust and profitability.”

Promisec Integrity is available now for those interested in taking the next step in cloud-based endpoint security solutions. Demonstrations are taking place this week at the RSA Conference in San Francisco. To set up an appointment for a demonstration, contact promisec@v2comms.com and learn more at www.promisec.com.

About Promisec

Promisec is a pioneer in endpoint visibility and remediation, empowering organizations to avoid threats and disarm attacks that can lead to unwanted headlines and penalties. Promisec’s technology assures users that their endpoints are secure, audits are clean, regulations are met and vulnerabilities are addressed proactively to ensure the integrity of enterprise IT. The Promisec Endpoint Management Solution provides the power of agentless remediation before these threats can have impact, ensuring endpoint security, compliance and operational efficiency. More than 350 globally recognized companies, such as Fossil, Cognizant and Teva, trust in Promisec to stay secure, compliant and operationally efficient. Visit www.Promisec.com for more information.



By Jarrett Neil Ridlinghafer
Chief Cloud Consultant
Compass Solutions, LLC

GuidePoint Security Partners With PerspecSys to Deliver Leading Cloud Data Protection and Compliance Solution to Enterprises


MCLEAN, VA and SAN FRANCISCO, CA – RSA CONFERENCE – Although cloud adoption is on the rise, many businesses, especially those in highly regulated industries like financial services, are lagging behind due to persistent concerns about compliance and loss of control over sensitive data. Today, PerspecSys, the leader in enterprise cloud data protection, and GuidePoint Security, a leading provider of innovative cloud data security solutions, have teamed up to help eliminate those concerns by delivering the award-winning PerspecSys AppProtex Gateway to GuidePoint customers nationwide.

GuidePoint Security has integrated the PerspecSys AppProtex Cloud Data Control Gateway into its portfolio of security offerings, which include security technology, services and support. Together, the companies provide a unique protection solution that allows enterprises and government entities to take full advantage of cloud application functionality while ensuring their sensitive, regulated corporate data remains on premise and under their full control at all times.

Data control is the real key to ensuring privacy and protection in cloud environments. IDG Enterprise recently surveyed 1,682 companies and found that the majority of IT decision makers see security as the primary barrier to cloud adoption. And their biggest concern? The loss of control associated with giving cloud providers responsibility for securing their information. The PerspecSys Gateway removes this barrier by enabling IT and security professionals to identify which clouds are being used and when, and to replace sensitive data before it enters the cloud by substituting it with a random token value or encrypted cipher-text. The Gateway remains completely transparent to cloud application users; they retain use of critical capabilities such as searching, sorting and reporting, even on data that has been strongly encrypted or tokenized.
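
The token-substitution approach described above can be sketched in a few lines. This is an illustrative toy, not PerspecSys's actual implementation: the real value never leaves the network, and the token that does leave carries no recoverable information.

```python
import secrets

class TokenVault:
    """Toy tokenization vault: sensitive values stay on premise."""
    def __init__(self):
        self._vault = {}  # token -> original value, kept inside the network

    def tokenize(self, value: str) -> str:
        # The token is random, so it reveals nothing about the original value.
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        # Only possible inside the network, where the vault lives.
        return self._vault[token]

vault = TokenVault()
outbound = vault.tokenize("4111-1111-1111-1111")  # what the cloud app stores
assert outbound != "4111-1111-1111-1111"
assert vault.detokenize(outbound) == "4111-1111-1111-1111"
```

A real gateway would also preserve format and searchability of the tokens, which is what makes sorting and reporting on tokenized data possible.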

“In today’s sophisticated and growing threat landscape, it’s critical for businesses to invest in security solutions that do more than just the basics,” said Michael Volk, Managing Partner, GuidePoint. “Good is no longer good enough. That’s why we seek out best-of-breed partners, which led us straight to PerspecSys. Their AppProtex Gateway delivers cloud data compliance, privacy and control by leaving sensitive data in the hands of the enterprise. Together, we’re giving our customers the ability to take full advantage of cloud applications without the headaches associated with sharing regulated data with cloud providers.”

According to international IT advisory firm Gartner, global Software as a Service (SaaS) spending is projected to grow to $32.8B in 2016, and global spending on public cloud services is expected to grow from $76.9B in 2010 to $210B in 2016. These figures are propelled in large part by the major value delivered by cloud-based software applications such as Salesforce.com. Companies around the globe want to take advantage of the time-savings, increased intelligence and best-in-class functionality of SaaS applications; however, data protection laws and regulations like the Gramm-Leach-Bliley Act in the United States or Australia’s Privacy Act impose strict guidelines on how sensitive data must be treated, especially when crossing international borders.

“Our goal at PerspecSys has always been to help enterprises enjoy the benefits of the cloud without any trade-offs, and working with GuidePoint Security helps us further that aim,” said David Canellos, CEO of PerspecSys. “GuidePoint has built their business on the ability to align innovative security technology with customer needs, which increasingly include a combination of security, privacy and residency concerns. They’re a trusted partner to some of the nation’s top companies and government organizations and we look forward to helping them and their customers meet growing protection and compliance demands without compromise.”

PerspecSys will be demonstrating the AppProtex Gateway in Booth #538 at the RSA Conference this week in San Francisco, California. To meet with our leadership team or schedule a demo, please fill out a request here: http://go.perspecsys.com/l/12862/2013-12-23/79klt.

About PerspecSys
PerspecSys Inc. is a leading provider of cloud data control solutions that enable mission critical cloud applications to be adopted throughout the enterprise. PerspecSys gives organizations the ability to understand how employees are using cloud applications and take the necessary steps to protect sensitive information before it leaves the network. By removing the technical, legal and financial risks of placing sensitive data in the cloud, PerspecSys makes the public cloud private. Based in Toronto, PerspecSys Inc. is a privately held company backed by investors, including Intel Capital, Paladin Capital and Ascent Venture Partners. For more information please visit www.perspecsys.com and follow us on Twitter @PerspecSys.
By Jarrett Neil Ridlinghafer
Chief Cloud Consultant
Compass Solutions, LLC

FireLayers Brings Policy-Based Security to Cloud Applications


FireLayers is looking to introduce a new era of cloud-based security with its latest security platform, which will allow users to customize the program using policy-based measures.

FireLayers promised a new era of cloud security with the release of its comprehensive policy-based, compliance and IT governance security platform for cloud applications.

The cloud app security company said its unifying platform will enable organizations to “adopt cloud applications on their own terms,” according to the press release. With the new program, IT departments will be able to deploy predefined and fully customizable settings for popular applications, including Microsoft (MSFT) Office 365, Workday, Yammer and a host of others.

By allowing IT and security managers to make their own security parameters tailored to each specific workplace, FireLayers achieves unprecedented cloud security, according to the company. The offering aims to eliminate the usage of generic cloud security applications by creating one that can be suited to each user’s individual needs.
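
As a rough illustration of what policy-based control like this can look like, the sketch below uses an invented rule format (not FireLayers' actual policy language): each rule names an application, an action and a condition, and the first matching rule decides the outcome.

```python
# Hypothetical policy table: first matching rule wins. The rule fields and
# the example context are invented for illustration only.
POLICIES = [
    # Downloads from Office 365 are allowed only on managed devices.
    {"app": "Office 365", "action": "download",
     "when": lambda ctx: ctx["device_managed"], "allow": True},
    {"app": "Office 365", "action": "download",
     "when": lambda ctx: True, "allow": False},
    # Everything else is allowed by default.
    {"app": "*", "action": "*", "when": lambda ctx: True, "allow": True},
]

def evaluate(app, action, ctx):
    for rule in POLICIES:
        if (rule["app"] in (app, "*")
                and rule["action"] in (action, "*")
                and rule["when"](ctx)):
            return rule["allow"]
    return False  # deny if no rule matches

assert evaluate("Office 365", "download", {"device_managed": True}) is True
assert evaluate("Office 365", "download", {"device_managed": False}) is False
```

The point of the design is that the rule table, not the enforcement engine, is what each organization tailors to its own workplace.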

“Companies are looking for comprehensive, proactive and preventive solutions as the basis for a successful cloud application security, compliance and IT governance program,” said Doron Elgressy, co-founder and president of FireLayers, in a prepared statement. “One-size-fits-all, reactive tools can be add-ons, but not the core platform that organizations need or are looking for.”

As always, cloud and mobile security is a major topic of discussion for channel partners and the companies that they work with. Several other security companies recently have revealed mobile security platforms, such as Bluebox’s recent announcement for its mobile data security solution. In addition, Google Ventures and several other investors funded startup Ionic Security to the tune of $25.5 million just last week to help the company beef up its growing security portfolio.

According to a study conducted by Dell, many IT companies continue to underestimate the importance of security when it comes to locking down corporate data, including breaches stemming from BYOD policies. While announcements from companies like FireLayers sound promising, the new application’s success is still dependent on each individual organization’s willingness to adopt more powerful security measures to protect sensitive information.

By Jarrett Neil Ridlinghafer
Chief Cloud Consultant
Compass Solutions, LLC

Sonatype And HP Integrate To Secure Cloud Components


Software development is increasingly being typified by a componentized approach. A single application might consist of code and component modules from a multitude of different sources. While this increases agility and allows developers to truly utilize best of breed aspects of the application, it also creates a minefield of security issues.

This is just the problem that security vendor Sonatype is trying to resolve. The company sells a component lifecycle management (CLM) tool that helps developers avoid using rogue open source components in their applications. CLM also automates the process for enforcing security policies across an application.

Sonatype is announcing today that HP has integrated the Sonatype product with HP’s own cloud-based security solution, HP Fortify on Demand. As part of the integration, Sonatype provides component analysis that identifies the third-party and open-source components commonly used as building blocks in modern applications. HP Fortify on Demand delivers software analysis that identifies security vulnerabilities in any application: web, mobile, infrastructure or cloud. Together, these capabilities make for a more complete software security solution by reducing an enterprise’s exposure to risk caused by the rapid adoption of open-source software components.

What this means for existing HP Fortify on Demand customers is that they can create a tailored “bill of materials” listing all the different components used in a particular application, identify which of those components have known vulnerabilities and prioritize the remediation tasks.
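
At its core, the bill-of-materials workflow is a join between a component inventory and a vulnerability feed, sorted by severity. The sketch below uses illustrative data, not HP's or Sonatype's actual feeds or APIs.

```python
# Illustrative component inventory and vulnerability feed (example data only).
components = [("struts", "2.3.15"), ("commons-io", "2.4"), ("log4j", "1.2.17")]
known_vulns = {
    ("struts", "2.3.15"): ("CVE-2013-2251", 9.3),
    ("log4j", "1.2.17"): ("CVE-2019-17571", 9.8),
}

# Match the inventory against the feed and order findings by severity,
# worst first, so remediation can be prioritized.
findings = sorted(
    ((name, ver, *known_vulns[(name, ver)])
     for name, ver in components if (name, ver) in known_vulns),
    key=lambda f: f[3], reverse=True,
)

for name, ver, cve, score in findings:
    print(f"{name} {ver}: {cve} (CVSS {score})")
```

Real tooling layers policy on top of this, for example failing a build automatically when any component exceeds a severity threshold.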

Sonatype also includes automated governance, monitoring and alerts, making it a fairly broad solution. Sonatype claims five of the world’s largest banks and several of the largest U.S. government agencies as customers.

Sonatype is privately held and has received venture funding from NEA, Accel Partners, Bay Partners, Hummer Winblad Ventures and Morgenthaler Ventures.

By Jarrett Neil Ridlinghafer
Chief Cloud Consultant
Compass Solutions, LLC

Vormetric Demonstrates SAP HANA® in the Cloud with Enhanced Security


SAN JOSE, Calif., Feb. 24, 2014 /PRNewswire/ — Vormetric, a leader in enterprise data security for physical, virtual and cloud environments, today announced that it will be demonstrating its cloud security technology with Virtustream and Intel at the upcoming RSA Conference 2014 in San Francisco February 24-28. The project, which has been fostered by the SAP® Co-Innovation Lab, demonstrates the SAP HANA® platform running in the cloud on the latest Intel® Xeon® processor E7 v2 family.  The managed service is performed by Virtustream, and security for the persistent storage in SAP HANA is provided by Vormetric Data Firewall™ through privileged-user controls, encryption, and security intelligence that have no noticeable performance impact on the user experience. The Vormetric technology makes it possible for the customer to be the sole custodian of the keys in encryption and access control.

The first demonstrations of this combined technology project will be at RSA Conference 2014 in the Vormetric booth 515 Moscone Center, South Hall, and in the Intel Security booth 3203 Moscone Center, North Hall. The demonstration will show the Vormetric Data Firewall used to protect the persistent storage of SAP HANA through:

  • Fine-grained file-system access control – Privileged users, or advanced persistent threats (APTs) acting as privileged users, will be prevented from accessing the persistent storage in SAP HANA.
  • Encryption and key custodianship – Persistent storage, log files and configuration files are encrypted, and the enterprise customer is the only custodian of encryption keys and access control policies. This means the cloud service provider will not have access to protected data or the ability to access or share encryption keys.
  • Security intelligence – Detailed audit logs will be sent to security information event management (SIEM) systems for analytics and reports on data access patterns. These reports can accelerate the detection of APTs, malware and abusive insiders who typically attempt to exploit the file access blind spot and operate undetected on servers.
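
As a toy illustration of the first and third bullets, the sketch below combines a path-based access check with an audit trail. The paths, process names and policy are invented for the example; this is not Vormetric's API.

```python
from datetime import datetime, timezone

# Hypothetical policy: only the database process may touch the protected
# data paths; even privileged users going around it are denied.
PROTECTED = ("/hana/data", "/hana/log")
AUDIT_LOG = []  # every attempt is recorded for the SIEM, allowed or not

def check_access(user: str, path: str, process: str) -> bool:
    allowed = not path.startswith(PROTECTED) or process == "hdb"
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user, "path": path, "process": process, "allowed": allowed,
    })
    return allowed

assert check_access("dbadmin", "/hana/data/datavolume", "hdb") is True
assert check_access("root", "/hana/data/datavolume", "cat") is False
assert len(AUDIT_LOG) == 2  # both attempts are visible downstream
```

The denied attempt by a privileged user is exactly the kind of event that, forwarded to a SIEM, helps surface APTs and abusive insiders operating on servers.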

Vormetric, Virtustream, Intel, and SAP all worked together to integrate and test the combined configuration. The tests took place at the SAP Co-Innovation Lab, the global laboratory network where members collaborate with SAP to develop and test innovative new technologies and solutions.

Ashvin Kamaraju, vice president of product development and partner management at Vormetric, said, “Attendees of RSA visiting the demo in our booth and in the Intel Security booth will see firsthand how our technology is able to deliver enhanced data-centric security for SAP HANA in cloud environments. Our goal is to ensure customers always control their data, deny third-party administrators access to sensitive data, and give customers the visibility they need for security and compliance. Vormetric is excited to be part of the technology leadership team that the SAP Co-Innovation Lab has brought together to develop innovative cloud solutions that are secure and high-performing at scale.”

“In light of recent major personally identifiable information (PII) breach news, encryption and data security should be top of mind. Our enterprise clients can help reduce this risk by using the Vormetric Data Firewall security service. For critical databases and application processes, the Vormetric service offers additional data security, compliance and control in the Virtustream environment,” explained Pete Nicoletti, CISO of Virtustream. “We are excited to be working with the SAP Co-Innovation Lab, Vormetric and Intel on this project and are looking forward to extending the Vormetric Data Firewall offering to our managed services clients on SAP HANA.”

“Enterprises are shifting investments to cloud providers that can ensure their sensitive data is protected,” said Shannon Poulin, vice president of marketing for Intel’s Data Center Group. “In order to deliver comprehensive data protection during high-transactional loads without perceptual degradation of the user experience, high performance encryption is required. Through the use of the Vormetric Data Firewall that takes advantage of Intel Data Protection technology with AES-NI and Secure Key, we expect that enterprises that run SAP HANA on Intel Xeon processor E7 v2 based cloud deployments from Virtustream will now be able to achieve this comprehensive protection.”

“The role of the SAP Co-Innovation Lab is to facilitate project-based co-innovation with its members to connect the innovation of today with the future,” said Dr. Axel Henning Saleck, vice president of the SAP Co-Innovation Lab global network. “We are very pleased to foster the efforts of Intel, Virtustream and Vormetric in developing a solution for SAP HANA in the cloud that will enable the cloud customer to be the sole custodian of policies and encryption keys, transparently controlling privileged-user access and encryption of persistent storage, logs and configuration of SAP HANA.”

Stop by the Vormetric booth 515 Moscone Center, South Hall, or the Intel Security booth 3203 Moscone Center, North Hall, during expo hours and see the Vormetric, Intel, and Virtustream solution for SAP HANA at RSA Conference 2014 in San Francisco February 24-28.

For more details on the project, see the joint white paper “Security in the Cloud for SAP HANA” from Intel with SAP, Virtustream and Vormetric.

About Vormetric

Vormetric (@Vormetric) is the industry leader in data security solutions that span physical, virtual and cloud environments. Vormetric helps over 1,300 customers, including 17 of the Fortune 25 and many of the world’s most security conscious government organizations, to meet compliance requirements and protect what matters, their sensitive data, from both internal and external threats. The company’s scalable solution protects any file, any database and any application, within enterprise data center, cloud and big data environments, with a high-performance, market-leading Vormetric Data Security Platform that incorporates application-transparent encryption, access controls and security intelligence. Vormetric: because data can’t defend itself.

Vormetric is a trademark of Vormetric, Inc

SAP, SAP HANA and all SAP logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries. All other product and service names mentioned are the trademarks of their respective companies.

SAP Forward-looking Statement
Any statements contained in this document that are not historical facts are forward-looking statements as defined in the U.S. Private Securities Litigation Reform Act of 1995. Words such as “anticipate,” “believe,” “estimate,” “expect,” “forecast,” “intend,” “may,” “plan,” “project,” “predict,” “should” and “will” and similar expressions as they relate to SAP are intended to identify such forward-looking statements. SAP undertakes no obligation to publicly update or revise any forward-looking statements. All forward-looking statements are subject to various risks and uncertainties that could cause actual results to differ materially from expectations. The factors that could affect SAP’s future financial results are discussed more fully in SAP’s filings with the U.S. Securities and Exchange Commission (“SEC”), including SAP’s most recent Annual Report on Form 20-F filed with the SEC. Readers are cautioned not to place undue reliance on these forward-looking statements, which speak only as of their dates.

SOURCE Vormetric

By Jarrett Neil Ridlinghafer
Chief Cloud Consultant
Compass Solutions, LLC

Cisco Adds Advanced Malware Protection to Web and Email Security Appliances and Cloud Web Security


Sourcefire Integration Empowers “AMP Everywhere” for the Extended Network

SAN FRANCISCO, CA, Feb 25, 2014 (Marketwired via COMTEX) — Cisco (NASDAQ: CSCO) today announced that it has added its Advanced Malware Protection (AMP), originally developed by Sourcefire, into its Content Security Portfolio of products, including Web and Email Security Appliances and Cloud Web Security Service. The integration provides customers worldwide with comprehensive malware-defeating capabilities, including detection and blocking, continuous analysis and retrospective remediation of advanced threats. This enhanced offering represents one of the initial technology integration efforts between Cisco and Sourcefire, and extends the option of advanced malware protection for more than 60 million enterprise and commercial users currently protected with Cisco Content Security solutions.

Advanced Malware Protection utilizes the vast cloud security intelligence networks of both Cisco and Sourcefire (now part of Cisco). Like the attacks it is designed to protect against, AMP evolves to provide continuous monitoring and analysis across the extended network and throughout the full attack continuum — before, during and after an attack. By combining Sourcefire’s deep knowledge of advanced threats and analytics expertise with Cisco’s industry leading Email and Web Security solutions, customers benefit from unmatched visibility and control combined with the most cost-effective, seamless approach to addressing advanced malware problems.

Cisco has also added Cognitive Threat Analytics, acquired last year via Cognitive Security, as an option for Cisco(R) Cloud Web Security customers. Cognitive Threat Analytics is a highly intuitive, self-taught system that uses behavioral modeling and anomaly detection to identify malicious activity and reduce time to discovery of threats operating inside the network. Both Cognitive Threat Analytics and AMP are available on Cisco Cloud Web Security as an optional license.

The addition of advanced malware technologies to Cisco Web and Email Security solutions, and of Cognitive Threat Analytics to Cisco Cloud Web Security, expands Cisco’s ability to provide threat-centric security for its customers by delivering advanced malware protection “everywhere” a threat can manifest itself. With this integration, Cisco addresses the broadest range of attack vectors across the extended network.

“Epsilon System Solutions takes a proactive stance against sophisticated attacks and turned to FireAMP to help ensure we are doing everything we can to identify, stop and remove threats on the endpoint as quickly as possible,” said Damon Rouse, IT Director at Epsilon System Solutions. “Bringing the AMP technology to the Cisco Web and Email Security Appliances and Cloud Web Security Services is a smart move that will greatly benefit customers in their efforts to protect against today’s rapidly evolving threats. AMP is the only solution we’ve seen that can combine the power of sandboxing with the innovation of file retrospection; it has helped to put us in a better position to further mitigate the impact of potential attacks.”

Instead of relying on malware signatures, which can take weeks or months to create for each new malware sample, AMP uses a combination of file reputation, file sandboxing, and retrospective file analysis to identify and stop threats across the attack continuum.

  • File Reputation analyzes file payloads inline as they traverse the network, providing users with the insights required to automatically block malicious files and apply administrator-defined policies using the existing Cisco Web or Email Security user interface and similar policy reporting frameworks.
  • File Sandboxing utilizes a highly secure sandbox environment to analyze and understand the true behavior of unknown files traversing the network. This allows AMP to glean more granular behavior-based details about the file and combine that data with detailed human and machine analysis to identify a file’s threat level.
  • File Retrospection solves the problem of malicious files that have passed through perimeter defenses but are subsequently deemed a threat. Rather than operating at a point in time, File Retrospection provides continuous analysis, using real-time updates from AMP’s cloud-based intelligence network to stay abreast of changing threat levels. As a result, AMP helps identify and address an attack quickly, before it has a chance to spread.
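
The retrospection idea can be sketched simply: remember the hash of every file delivered, and when the intelligence network later flips a verdict, look back to see which hosts already received it. This is a conceptual toy, not Cisco's implementation.

```python
import hashlib

seen = []  # (sha256 digest, host) for every file that crossed the gateway

def record(content: bytes, host: str):
    """Log the hash of a delivered file; nothing is blocked at this point."""
    seen.append((hashlib.sha256(content).hexdigest(), host))

def retrospect(bad_hashes: set) -> list:
    """Hosts that already received a file *now* known to be malicious."""
    return [host for digest, host in seen if digest in bad_hashes]

record(b"invoice.pdf contents", "laptop-7")
record(b"harmless memo", "laptop-9")

# Days later, the cloud intelligence marks the first file malicious:
flagged = {hashlib.sha256(b"invoice.pdf contents").hexdigest()}
assert retrospect(flagged) == ["laptop-7"]
```

Because the lookup is against a stored history rather than live traffic, a changed verdict can reach back to infections that a point-in-time scan would have missed.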

Christopher Young, senior vice president, Cisco Security Business Group, said: “Today’s advanced threats that can attack hosts through a combination of different vectors require a continuous security response versus point in time solutions. Web and Email gateways do a large amount of heavy lifting in the threat defense ecosystem, blocking the delivery of malicious content. By bringing together AMP and threat analytics with our Web, Cloud Web and Email Security gateways, we provide our customers with the best advanced malware protection from the cloud to the network to the endpoint.”

Advanced Malware Protection on the Network

On the network, AMP continues to be an integrated capability in FirePOWER appliances for Next-Generation IPS or Next-Generation Firewall, or available as a standalone appliance. Also, FireAMP solutions provide endpoint protection for PCs, mobile devices and virtual environments, working with the FirePOWER and standalone appliance offerings through a connector.

As network speeds continue to increase, the need for higher-performing appliances capable of advanced malware protection increases. To fulfill this need, Cisco is also announcing the four latest and fastest FirePOWER appliances, all designed for compatibility with AMP. The 8350 (15 Gbps), 8360 (30 Gbps), 8370 (45 Gbps) and 8390 (60 Gbps) are stackable additions to the FirePOWER family and will work with all of the existing NetMods for modularity and mixed-media support. The FirePOWER 8300 series delivers a 50 percent increase in inspected throughput and is stackable up to 120+ Gbps of throughput.

By Jarrett Neil Ridlinghafer
Chief Cloud Consultant
Compass Solutions, LLC

RSA Conference 2014 Day One: Cloud Summit, CISO values


Though there has been plenty of discussion on RSA’s relationship with NSA leading up to this week’s 23rd annual RSA Conference 2014, held February 24-28 at the Moscone Center in San Francisco, there are plenty of IT security topics on tap for attendees.

From information security leadership needs to mobile device security, there will be a great deal of overlap between the show’s educational sessions, which are enterprise focused, and healthcare’s IT security concentrations.

Cloud security, unsurprisingly, will be a prominent theme throughout the conference, and the Cloud Security Alliance (CSA) Summit 2014 will be among the show’s early events on Monday from 9 a.m. to 1 p.m. PST. Cloud security has multiple layers, such as network repercussions and impact on mobile security, and many organizations are already using cloud-based infrastructure or are making efforts to implement cloud-based services. In addition to other areas of interest, the session will cover the latest threats and areas of concern, and how organizations can adopt and integrate elements of the NIST Cybersecurity Framework to protect their cloud-based critical infrastructure and mitigate their risk against attacks.

Another presentation of interest for healthcare organizations will be Monday’s “Advancing Information Risk Practices Seminar“, which will analyze risk management challenges such as ranking security gaps, handling business interactions and building a qualified resource pool.

Healthcare CISOs may want to check out “So Why on Earth Would You WANT to be a CISO?”, a presentation by Todd Fitzgerald, Director of Information Security with Grant Thornton International, on Monday at 2:25 p.m. During his presentation, Fitzgerald will describe the real “DNA” of a CISO’s job, including laws and regulations, incident handling, security strategy, control frameworks, senior management metrics, security policy, investment, auditing and data privacy. Additionally, he will break down the differences between a “Techie” and a CISO, such as differences in thought process and the ability to handle business relationships.

Also on Monday will be “Running Secure Server Software on Insecure Hardware without a Parachute“, which focuses on the state of server hardware security misconceptions. Nicholas Sullivan, Systems Engineer at CloudFlare, will discuss advanced techniques for protecting software on untrusted clients and how to apply them to servers running on untrusted hardware, including anti-reverse engineering methods, secure key management and how to design a system for renewal.

While these sessions aren’t specific to healthcare, there will be plenty of lessons to be learned for organizations looking to be proactive about new-age security threats. Continue to check back to HealthITSecurity.com this week for more RSA Conference 2014 coverage.

By Jarrett Neil Ridlinghafer
Chief Cloud Consultant
Compass Solutions, LLC

Cloud & BYOD Top List of IT Security Concerns


Despite the persistent threat of security breaches, most businesses aren’t too concerned about unknown security risks, research shows.

A new study by Dell revealed that although security breaches cost U.S. organizations an estimated $25.8 billion annually, many companies fail to effectively recognize and prioritize the next big wave of risk to IT security from unknown threats.

The research shows the majority of IT leaders around the world say they don’t view unknown security threats stemming from trends and technologies like BYOD (bring your own device), mobility, cloud computing and Internet usage as top security concerns. Therefore, these companies aren’t coming up with ways to find and address these potential threats.

Specifically, only 37 percent of the IT leaders surveyed ranked unknown threats as a security issue they will worry about in the next five years.

“All threats expose an organization to significant risk, but unknown threats, particularly, are silent predators that can have profound and catastrophic implications on performance and continuity,” said Stacy Duncan, vice president of IT for DavCo Restaurants, an operator of more than 150 restaurants.

The study discovered that epidemic threats come from all perimeters, both inside and outside the organization. They are often hidden in poorly configured settings or permissions, as well as in ineffective data governance, access management and usage policies. Those surveyed believe it will take a collective effort to effectively protect themselves from new and unknown risks.

Eighty-five percent of the U.S. IT leaders surveyed said that organizations will need to restructure and reorganize their IT processes, as well as collaborate more with other departments to stay ahead of the next security threat.


Nearly a quarter of survey respondents highlighted BYOD as the root cause of a breach. For instance, when employees are allowed to use their own personal devices to access confidential company information, critical data can be exposed. Overall, 57 percent of survey respondents ranked the increased use of mobile devices as a top security concern in the next five years.

With that in mind, 44 percent of those surveyed said instituting policies for BYOD security is critical for preventing security breaches.

Cloud security is also a top concern for IT leaders. More than 20 percent of those surveyed said cloud apps or service usage were the root cause of their security breaches.

“Although cloud [technology] presents massive opportunities for corporate IT in terms of cost savings, security issues are rising to the forefront,” said Mary Hobson, director of eResearch South Australia, which enables discovery, innovation and collaboration by providing eResearch facilities, services, training and expertise.

While they might not be focused on unknown threats specifically, most businesses are increasing their IT security efforts. Nearly 70 percent of those surveyed have increased funds spent on employee security education and training in the past 12 months, and 50 percent believe security training for both new and current employees is a priority.

Additionally, 72 percent of U.S. businesses have increased spending in threat monitoring services over the past year.

The study was based on surveys of 1,400 IT decision makers from organizations based in the United States, Canada, the United Kingdom, France, Germany, Italy, Spain, India, Australia and China.


Cloud Security Alliance Presents Industry Leadership Award to Professor Udo Helmbrecht of ENISA


Leader of European Union’s Cyber Security Agency Honored

San Francisco, CA – RSA CONFERENCE – February 24, 2014 – The Cloud Security Alliance (CSA) has named Prof Udo Helmbrecht, Executive Director of the European Union Agency for Network and Information Security (ENISA), as the 2014 recipient of the CSA Industry Leadership Award. The CSA Industry Leadership Award is given to individuals in recognition of their contributions in advancing secure cloud computing initiatives. ENISA has been an affiliate member of CSA since its inaugural year of 2009.

“Prof Helmbrecht has approached the issue of cloud computing adoption with a strategic vision and healthy skepticism,” said Jim Reavis, CEO of the CSA. “The very diverse European market provides many unique challenges. Prof Helmbrecht has not been a cheerleader for the industry, but has engaged with all key stakeholders to support responsible adoption of cloud computing based on solid security principles. We have enjoyed a healthy partnership with ENISA that has included collaboration to create the Certificate of Cloud Security Knowledge (CCSK), our SecureCloud conferences and many other research projects. Prof Helmbrecht can take satisfaction in his role in driving a more secure cloud and greater privacy protection for European citizens.”

In 2009, Helmbrecht was appointed as Executive Director of ENISA by its Management Board, following a presentation to the European Parliament’s ITRE committee. Under Helmbrecht’s leadership, ENISA has consolidated its role as a centre of network and information security expertise, and continued its work to facilitate cooperation in network and information security across Europe. Helmbrecht has presided over many advancements in secure cloud computing, including the original cloud computing risk assessment research in 2009, as well as research on service level agreements, government clouds and incident reporting.

“As a business model, cloud computing offers many advantages over traditional computing and is revolutionising the way we use ICT”, said Professor Helmbrecht. “ENISA believes there are important security opportunities as well, and we are working together with the public and the private sector to capitalize on these opportunities. Our goal is to ensure that security of cloud computing surpasses that of traditional computing models. In this way we ensure that good security practices stimulate economic growth.

“I am honoured to receive this award from the Cloud Security Alliance and accept it on behalf of the entire staff at ENISA. The success of ENISA’s cloud security papers shows the power of collaboration between public sector, industry and industry associations. Without their engagement and support we would not be able to effectively promote better security practices in the EU. Collaborations with global industry associations, such as our partnership with CSA, are crucial for our mission: to be a centre of security expertise for the EU.”

Prof Helmbrecht is the third recipient of the CSA Industry Leadership Award, following Microsoft Corporate Vice President of Trustworthy Computing Scott Charney in 2013 and Trend Micro CEO Eva Chen in 2012.

About the Cloud Security Alliance
The Cloud Security Alliance is a not-for-profit organization with a mission to promote the use of best practices for providing security assurance within Cloud Computing, and to provide education on the uses of Cloud Computing to help secure all other forms of computing. The Cloud Security Alliance is led by a broad coalition of industry practitioners, corporations, associations and other key stakeholders. For further information, visit us at www.cloudsecurityalliance.org, and follow us on Twitter @cloudsa.

About the European Union Agency for Network and Information Security (ENISA)
The European Union Agency for Network and Information Security (ENISA) provides expertise on cyber security issues for the EU and its Member States. ENISA is the EU’s response to these cyber security issues. As such, it is the ‘pace-setter’ for Information Security in Europe, and a centre of expertise.

Main areas of work:

  • Set up and training of Computer Emergency Response Teams
  • Critical Information Infrastructure Protection (CIIP)
  • Supporting other EU actors in the fight against cybercrime
  • Conducting Cyber Security Exercises
  • Adequate and consistent policy implementation
  • International cooperation

Media Contacts:
Robert Nachbar

By Jarrett Neil Ridlinghafer


Cloud Imposters?


As I’ve stated in many, many articles over the past 5 years, probably 80% of hyped “cloud” services are just hosted applications attempting to grab a piece of the cloud pie. Do your research, or contact an expert like myself or someone at Compass Solutions, LLC of Washington DC, and make sure what you are doing does not carry the very real risk of taking your company out of business.

By Lawrence Garvin

Hardly any of the services that vendors market as “cloud” adhere to the 2011 NIST definition. No wonder we’re so confused.

Here a cloud, there a cloud. Everything today is a cloud service. Except most aren’t.

In September 2011, after several years of discussion and 15 drafts, the National Institute of Standards and Technology adopted a real definition of cloud computing. Since then, however, NIST’s definition has been lost in translation due in part to opportunistic vendors and marketers, aided at times by ignorant media.


NIST defined five essential characteristics of cloud services: on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service. Virtually no “cloud” service offered today meets all five characteristics.
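As a rough illustration, the five characteristics can be treated as a checklist: a service qualifies only if it exhibits all five. The names and the example claims below are my own shorthand for the sake of the sketch, not NIST’s wording:

```python
# The five essential characteristics from NIST SP 800-145, as checklist keys
# (key names are my own shorthand, not NIST's exact phrasing)
NIST_CHARACTERISTICS = [
    "on_demand_self_service",
    "broad_network_access",
    "resource_pooling",
    "rapid_elasticity",
    "measured_service",
]

def is_cloud_service(claims: dict) -> bool:
    """A service is 'cloud' only if it meets all five characteristics."""
    return all(claims.get(c, False) for c in NIST_CHARACTERISTICS)

# A typical self-styled "cloud" SaaS offering: on the Internet and
# virtualized, but sold by contract at a flat fee
hosted_saas = {
    "on_demand_self_service": False,  # sales call and contract required
    "broad_network_access": True,
    "resource_pooling": True,
    "rapid_elasticity": False,        # capacity added manually
    "measured_service": False,        # flat monthly fee per user
}
print(is_cloud_service(hosted_saas))  # False
```

Run the checklist against most services marketed as “cloud” today and, as the article argues, the answer comes back False.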

NIST further defined three cloud models: software-as-a-service, platform-as-a-service, and infrastructure-as-a-service. But all three existed in practice long before the term “cloud” ever entered the vernacular.

Consider the idea that AOL and CompuServe were SaaS providers, as were the myriad ASPs (application service providers) that sprouted around the time of the dotcom boom. Several companies have provided hosted corporate email servers for a dozen years — are they not PaaS providers? It’s been just as easy to buy a hosted “computer” for running a web server or other software fully managed by the customer. This is no different from IaaS.

None of these services were then — nor are now — consistent with the true ideals of cloud computing. Certainly none of them provided on-demand self-service. If I needed to ramp up a hosted email environment or acquire a machine to host a web server, those efforts took days, contract negotiations, and a bunch of other headaches that defied “on-demand.”

Broad network access is probably the least significant of NIST’s cloud characteristics. Everything runs across the Internet, a wireless carrier’s infrastructure, or both, including simple file storage and sharing services, which are as old as FTP servers. But broad network access is the one characteristic most companies latch onto in claiming their cloud bona fides.

Resource pooling is a de facto characteristic in almost every IT environment because of the extreme levels of virtualization being employed. The latter is the biggest difference between hosting an email server today and doing so several years ago. Today, your email server is just as likely to be a virtual machine on a cluster of several dozen virtual machines — assuming you’re still running email in-house rather than just subscribing to a service.


A defining factor in whether a service is cloud based should be multi-tenancy, the ability for my services and your services to co-exist, in isolation, within the same physical environment. In contemporary terms, this is something like Office 365, whereby your organization has a group of users sharing the same email server with lots of other groups of users.

Two other defining characteristics are rapid elasticity (the ability to scale up and down quickly as needed) and measured service (whereby payment is based on actual use, not potential use).

Rapid elasticity implies that the scaling of services happens automatically, not just when some human finally recognizes the need.

If you’re paying a monthly subscription fee for a SaaS product based on the number of licensed users, that’s not measured service, so that’s not a cloud service. If you’re renting a machine in a hosting facility and you’re paying a flat monthly fee for access to that machine regardless of how much you use it, that’s not measured service. If you can’t add more machines under that hosted service automatically before you exceed the capacity of the first machine, that’s not rapid elasticity, thus not a cloud service.
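The distinction between measured service and flat-fee billing is easy to see in a sketch. The functions, rates, and usage figures below are hypothetical, but they capture the difference between paying for potential use and paying for actual use:

```python
def measured_service_bill(minutes_used: float, rate_per_minute: float) -> float:
    """Measured service: pay only for actual, metered use."""
    return minutes_used * rate_per_minute

def flat_fee_bill(licensed_users: int, fee_per_user: float) -> float:
    """Flat-fee subscription: pay for potential use, regardless of activity."""
    return licensed_users * fee_per_user

# Hypothetical: 50 licensed seats at $10/month, but only 12,000 minutes
# of actual application use across the whole organization
print(flat_fee_bill(50, 10.0))               # 500.0
print(measured_service_bill(12_000, 0.002))  # 24.0
```

Under the flat-fee model the customer pays the same $500 whether the service is used heavily or not at all; only the metered bill qualifies as measured service in the NIST sense.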

It’s no wonder IT professionals — and consumers, for that matter — are so confused. Some definition offenders label their products cloud just because they’re on the Internet and are running in some virtualized infrastructure, which is the case with many SaaS products. Others have defined their own version of the term cloud, attempting to reduce the distinction to a simple matter of where the data is stored. For example, they state that the cloud is just a metaphor for the Internet, and that storing data on an office network doesn’t count as cloud storage. But that claim puts a big dent in the concept of a private cloud environment. It’s also based exclusively on the idea of broad network access, without any consideration of the other characteristics of cloud services.

What would make the above “cloud services”? How about your SaaS provider billing you based on the actual number of users each day, or the number of actual logged-on minutes per day, or, best of all, the number of actual minutes of application used each day? For a hosted email service to truly be a cloud service, it would be billed based on the actual resource consumption each day, giving customers unlimited capacity and the ability to add and remove users on-demand. Customers wouldn’t pay more just because they added more mailboxes. So if you have to do more than call up a single Web page to change something in your service provisioning, or if you’re paying a flat fee per some time period or per user, let’s just call the service what it is: SaaS, not cloud.


In a survey of 200 IT managers conducted by ElasticHosts in October 2012, more than 67% of respondents said they had been offered cloud services under a fixed term, 40% had been offered services that weren’t scalable, and 32% noted that the services offered weren’t even self-service. There’s no reason to think much has changed since then.

Is your company using any actual cloud services? Or are you using not-cloud services labeled as such? Do you want to get off the merry-go-round with me?

By Jarrett Neil Ridlinghafer

Cloud Security Alliance and FIDO Alliance to Drive Cloud and Mobile Authentication Initiatives


SAN FRANCISCO, CA — (Marketwired) — 02/24/14 — RSA Conference — The Cloud Security Alliance (CSA) has today announced that it has signed a Memorandum of Understanding with the FIDO (Fast IDentity Online) Alliance to promote the need for a standards approach to authentication when tackling the needs of large-scale cloud services.

The Cloud Security Alliance had previously identified authentication and the broader issue of identity as one of the critical areas for cloud computing. With the increasing dominance of the mobile device as a primary point of access to cloud services, the Cloud Security Alliance established a Mobile Working Group. They have identified the need to provide scalable authentication from mobile devices to multiple, heterogeneous cloud providers as an important step toward the maturity of cloud solutions.

“The last 12 months have seen a shift in the cloud authentication landscape as more and more providers look to add additional layers of protection,” said Jim Reavis, CEO, Cloud Security Alliance. “The security and usability challenges this creates mean that a standards-based approach is the only practical direction. We are pleased to work together with the FIDO Alliance to encourage greater understanding of the requirements of modern authentication systems and to help our respective members to reduce the burden on their customers.”

“FIDO shares many of the same aims as the Cloud Security Alliance,” said Michael Barrett, president of the FIDO Alliance. “As we have been working on a common, industry standard for strong authentication, we have found ourselves engaged with cloud service providers who have clear requirements to deliver simple, strong authentication to meet their customers’ needs. By working together, the CSA and the FIDO Alliance will be able to ensure that these emerging standards meet these needs.”

Many of the members of the FIDO Alliance — Google, Microsoft, Nok Nok Labs, Ping Identity, RSA, SafeNet and Salesforce.com — are also members of the Cloud Security Alliance. This membership crossover shows how the common themes of cloud enablement, mobility and authentication have converged. By working together, the FIDO Alliance and the CSA are able to promote standards-based solutions to cloud and mobile authentication challenges.

Industry-driven FIDO specifications will support a full range of authentication technologies, including biometrics such as fingerprint and iris scanners, voice and facial recognition. FIDO specifications will enable existing solutions and communications standards, such as Trusted Platform Modules (TPM), USB Security Tokens, embedded Secure Elements (eSE), Smart Cards, Bluetooth Low Energy (BLE), and Near Field Communication (NFC). FIDO specifications are being designed to be extensible and to accommodate future innovation, as well as protect existing investments. FIDO specifications allow the interaction of technologies within an interoperable infrastructure, enabling authentication choice to meet the distinct needs of users and organizations.

Board members from the FIDO Alliance will be present at the CSA Summit 2014

By Jarrett Neil Ridlinghafer

Samsung and UCSF team up to tackle digital health


UCSF and Samsung are joining forces to accelerate the development of new sensors, algorithms and digital preventative health technologies.

On Friday they announced the establishment of the UCSF-Samsung Digital Health Innovation Lab, a new space to be located in UCSF’s Mission Bay campus where lead researchers and technologists will collaborate on developing mobile health technologies, as well as testing existing ones.

“Harnessing new preventative health technologies to help people live healthier lives is the next great opportunity of our generation,” said Young Sohn, president and chief strategy officer of Samsung Electronics, in a statement.

The divide between technology and medicine has traditionally been problematic in the mobile health space. Often mobile health technologies lack evidence-based approaches that would make them medically useful. And just as often medical practitioners are slow to adopt emerging technologies that might move the field forward.

The joint innovation lab intends to be the rare kind of space where entrepreneurs and innovators will be able to validate their technologies and accelerate the adoption of new preventive health solutions alongside top medical researchers.

“There are many new sensors and devices coming onto the market for consumers, but without medical validation, most of these will have limited impacts on health. Meanwhile, many practitioners also have creative ideas for new devices, but they lack the technological knowledge to fully develop them,” said Dr. Michael Blum, UCSF’s associate vice chancellor for Informatics.

Fitbit Issues Voluntary Recall For Fitbit Force Due To Skin Irritation, New Version Coming “Soon”


Fitness tracking company Fitbit has just issued a recall (and stopped sales) of their popular product the Fitbit Force, a wrist-borne activity tracker that follows their Fitbit Flex but adds a display and more advanced features. The Force was a popular item according to most estimates, and was often sold out after its launch late last year.

Complaints began to arise from users who were experiencing skin rash issues from sustained wear of the band, however. The company previously offered to replace devices with other trackers, or refund the money of those affected, but now it’s going all in on a full-scale recall. Here’s the official word from the company on the move:

We wanted to provide an update on our investigation into reports we have received about Force users experiencing skin irritation.

From the beginning, we’ve taken this matter very seriously. We hired independent labs and medical experts to conduct a thorough investigation, and have now learned enough to take further action. The materials used in Force are commonly found in many consumer products, and affected users are likely experiencing an allergic reaction to these materials.

While only a small percentage of Force users have reported any issue, we care about every one of our customers. We have stopped selling Force and are in the process of conducting a voluntary recall, out of an abundance of caution. We are also offering a refund directly to consumers for full retail price. We want to thank each and every member of the Fitbit community for their continued loyalty and support. We are working on our next-generation tracker and will announce news about it soon.

For additional information, please contact our support line at: 888-656-6381, or visit http://www.fitbit.com/forcesupport.

Fitbit cofounder and CEO James Park also released a letter to Force users and the Fitbit community addressing the issue. In it, he notes that only 1.7 percent of users have officially reported issues with skin irritation to the company. But he also details the results of an independent study conducted by an external group of experts contracted by Fitbit, which found that the problem is indeed allergic contact dermatitis, probably resulting from contact with either trace amounts of nickel or with strap materials/glue used in product construction.

Here’s where Force owners can go to request a return for their device. Affected users send in their Force, and once it’s confirmed to be covered by the recall, Fitbit will send them a physical check for the full retail price of the device, which should arrive between two and six weeks after the Force is received by the company.

There’s no firm ETA on when a new (presumably hypoallergenic) version of the Force will arrive, according to Fitbit, but it’s already in the works.

Fitbit isn’t the only company to have run into trouble with an activity tracker, resulting in a recall. Jawbone had to call back its first-generation UP wristband, but for entirely different reasons: That device was prone to fatal failures shortly after consumers got it home, leading the company to take it back to the drawing board for nearly a year.


Test-driving Boston’s Bitcoin ATM


By Open Source Community
The world’s first public Bitcoin ATM popped up in Boston’s South Station yesterday. I tried it out.

Bitcoin has been the internet’s darling of the past year, regularly skyrocketing and plummeting in value and busting out of the tech media to grab mainstream news headlines. Few people, relatively speaking, know that much about it, and most who have heard about it know only that it’s a digital currency that only lives online.

That’s not necessarily true anymore. Yesterday, the country’s first public Bitcoin ATM popped up somewhat unexpectedly in Boston’s South Station. After all this time hearing about this potentially transformative, potentially risky new technology living on the internet, it had materialized in the real world, right in my back yard. So I had to take a look.

The LibertyTeller in South Station isn’t necessarily easy to find. I hadn’t noticed as much before, but South Station is overrun with self-service machines: subway ticketing machines, Amtrak ticketing machines, cash ATMs, even lottery ticket-dispensing machines. The LibertyTeller, by comparison, is roughly a third of the size of a cash ATM, and for now it sits on a table by the rear end of South Station, near the automatic doors leading to the Track 6 platform. I’m not sure I would have even noticed it had I not been looking for it. Even so, I don’t know how much longer it would have taken me to find it had I not spotted the two employees wearing LibertyTeller t-shirts. (To be fair, they provide some pretty good directions on their website, but I wanted to try to find it organically.)

These two young men happened to be LibertyTeller’s co-founders Chris Yim and Kyle Powers. They said “hundreds” of people have dropped in to try the LibertyTeller since they brought it to South Station yesterday. Of course, this reception is the result of a bustling Bitcoin community, undoubtedly thrilled to see a cash-for-Bitcoin machine operating in public. Some even came from New York and New Hampshire just to try it out.

The machine, however, is not designed for Bitcoin experts. You don’t even need a digital Bitcoin wallet to use it (although the LibertyTeller supports the mobile apps). Small, paper “wallets” sat to the side of the machine, each showing two unique QR codes and a Bitcoin address. The paper is even folded and adorned with a small sticker to present only the QR code marked “Load and Verify.” Unfolding and removing the sticker reveals the “Spend” QR code, which I can imagine is the one you scan when you want to spend your Bitcoin. That was a smart move for a company preparing for long lines, should they develop. The uninitiated can’t try to scan the wrong QR code if they can only find the right one.

From there, it’s a really easy process. Press the start button, scan the QR code, then insert the cash. If you don’t believe me, here’s a Vine.

Overall, I’d say I was impressed. Perhaps the smartest move was converting the price to mBTC, or one-thousandth of a Bitcoin. Showing the amount in small decimal amounts of a single Bitcoin would turn a lot of people off. Being able to show it in the same format in which we count money translates easily to new users.
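The conversion itself is trivial: one mBTC is a thousandth of a Bitcoin, so prices land in familiar whole-and-two-decimal territory. A quick sketch, using a made-up $500/BTC exchange rate purely for illustration:

```python
def btc_to_mbtc(btc: float) -> float:
    """1 BTC = 1,000 mBTC (millibitcoin)."""
    return btc * 1_000

def usd_price_in_mbtc(usd_amount: float, usd_per_btc: float) -> float:
    """Price a dollar purchase in mBTC rather than a long decimal of BTC."""
    return usd_amount / usd_per_btc * 1_000

print(btc_to_mbtc(0.025))          # 25.0 mBTC instead of "0.025 BTC"
print(usd_price_in_mbtc(20, 500))  # a $20 purchase at $500/BTC is 40.0 mBTC
```

“25 mBTC” reads like money; “0.025 BTC” reads like a rounding error, which is exactly the psychological point the machine’s designers seem to have grasped.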

Bitcoin has had a rough month, in both value and publicity. But the LibertyTeller is a sign that it’s pushing through. Expect more of these, and don’t be surprised to see Bitcoin continue to creep out of the internet and into your daily routine.

San Francisco-based mobile enterprise app developer Acompli has raised $7.3 million


San Francisco-based mobile enterprise app developer Acompli has raised $7.3 million in a Series A round led by Redpoint Ventures with participation from Harrison Metal and Felicis Ventures. Acompli is building an all-in-one email and productivity app designed to increase the speed and ease of writing and responding to emails for professional users. Founded in 2013, Acompli plans to launch the app with a freemium-based business model in Q2 of 2014.

Redwood City-based cybersecurity startup ThreatStream has raised a Series A round of $4 million


Redwood City-based cybersecurity startup ThreatStream has raised a Series A round of $4 million led by Google Ventures with participation from Paladin Capital Group, Tom Reilly, and Hugh Njemanze. ThreatStream is building the first crowd-sourced cyber security intelligence solution, leveraging data and analytics to aggregate and filter through millions of threat indicators from around the internet to identify potential threats and targets in real time. Founded in 2013, ThreatStream will use the new funds to add another layer of security for enterprise and government customers.

Atlanta-based data security platform Ionic Security has raised $25.5 million


Atlanta-based data security platform Ionic Security has raised $25.5 million in a Series B round co-led by Google Ventures and Jafco Ventures with participation from the Webb Investment Network and existing investors Kleiner Perkins Caufield & Byers, ff Venture Capital, TechOperators, and a handful of individual investors. Ionic provides a comprehensive and unified data security platform to protect data in the cloud and on mobile devices. Founded in 2011, Ionic has raised nearly $40 million to date and will use the new funds to expand its reach and sign more customers.

3D-printed exoskeleton helps paralyzed skier walk again


Amanda Boxtel’s doctors told her she’d never walk again. But her new 3D-printed exoskeleton says otherwise.

In 1992, Boxtel was paralyzed from the waist down in a catastrophic skiing accident. But 22 years later, thanks to a groundbreaking 3D-printed robotic suit developed by 3D Systems and EksoBionics, she’s able to stand up and move around on her own.


Boxtel’s new exoskeleton, the first of its kind, was custom-built for her. Designers from 3D Systems scanned her body, digitizing the contours of her spine, thighs, and shins, a process that helped them mold the robotic suit to her. Then they combined the suit with a set of mechanical actuators and controls made by EksoBionics. The result, said 3D Systems, is the first-ever “bespoke” exoskeleton.
According to Scott Summit, the senior director for functional design at 3D Systems, the partnership between the two companies was about coming up with a way to fit the exoskeleton to Boxtel’s body in such a way that it never had hard parts bumping into “bony prominences,” such as the knob on the wrist.

Such body parts “don’t want a hard surface touching them,” Summit explained. “We had to be very specific with the design so we never had 3D-printed parts bumping into bony prominences, which can lead to abrasions” and bruising.


One problem that the designers faced in this case was that a paralyzed person like Boxtel often can’t know that bruising is happening because she can’t feel it. That’s dangerous, Summit said, because undetected bruises or abrasions can become infected. “So we had to be very careful with creating geometry that would dodge the parts of the body that it had to dodge…[designing] parts that wouldn’t impede circulation or cause bruising.”

In addition, because 3D-printing allows the creation of very fine details, Boxtel’s suit was designed to allow her skin to breathe, meaning she can walk around without sweating too much.

The process of creating the 3D-printed robotic suit lasted about three months, Summit said, starting when he and 3D Systems CEO Avi Reichenthal met Boxtel during a visit to EksoBionics. Boxtel is one of ten EksoBionics “test pilots” and Reichenthal was inspired by her vision of what might be done with her new exoskeleton.

Already, Summit said, the exoskeleton was designed to attach to the body very loosely with Velcro straps, with an adjustable fit. But it wasn’t meant for any one person.

That’s where 3D Systems came into play. By using a special 3D scanning system, Summit’s team was able to create the custom underlying geometry they used in making the parts that attach to the exoskeleton. “When the robot becomes the enabling device to take every step for the rest of your life,” Summit said of Boxtel and her exoskeleton, “the connection between the body and the robot is everything. So our goal is to enhance the quality of that connection so the robot becomes more symbiotic.”

The robotic suit, created by 3D Systems and EksoBionics, allows Amanda Boxtel, who was paralyzed in a skiing accident, to walk for the first time since 1992.

Watch Video


Printing Skin Cells on Burn Wounds


Skin is the body’s largest organ. Loss of the skin barrier results in fluid and heat loss and the risk of infection. The traditional treatment for deep burns is to cover them with healthy skin harvested from another part of the body. But in cases of extensive burns, there often isn’t enough healthy skin to harvest.

During Phase I of AFIRM, WFIRM scientists designed, built and tested a printer that applies skin cells directly onto burn wounds. The “ink” is actually different kinds of skin cells. A scanner is used to determine wound size and depth; because different kinds of skin cells are found at different depths, this data guides the printer as it applies layers of the correct type of cells to cover the wound. You only need a patch of skin one-tenth the size of the burn to grow enough skin cells for skin printing.

During Phase II of AFIRM, the team will explore healing wounds. The goal of the project is to bring the technology to soldiers who need it within the next 5 years.

This video — with a mock hand and burn — demonstrates the process.

Click Here for Video link

The future of medicine means part human, part computer


By CNBC’s Cadie Thompson

Forget wearable technology. It may not be too much longer before sensors are actually put inside your body.

It may sound a little bit futuristic and far-fetched, but the reality is that ingestible sensors and implantable chips are already in use and growing.

“We are going to see more sensors everywhere. It’s only a matter of time before those migrate under our skin into our bodies,” said Peter Eckersley, the lead technologist at the Electronic Frontier Foundation.

Much like wearable devices, which can capture data about a person’s activity levels, sensors inside the body can be used to collect information about what is going on inside a person’s body.

“There’s going to be a ubiquitous data collection. Right now, the data is coming from the phone and wearable devices, but eventually some will be within our bodies. And having that data available can mean enormous health benefits,” Eckersley said.


One of the biggest health advantages of these devices is using the machines to help treat chronic illnesses, said Arna Ionescu, director of product development and user experience at Proteus Biomedical, which is working to make digital medicines.

“The thing about chronic illness is that it’s not something that can be solved in one appointment; it’s something that you have to manage and deal with every single day of your life,” Ionescu said. “So we are creating tools that can go into people’s hands and help them deal with those chronic illnesses.”

Proteus is working with Novartis and Otsuka Pharmaceutical—which have both also invested in the company—to make ingestible digital pills mainstream. The company has already developed ingestible sensors that are FDA approved. The goal is for drug-makers to include the sensors in medicine to collect data that enables physicians to better monitor their patients.

Information that can be collected from these sensors includes how the patient’s body reacts to the drug, the patient’s dose timing, and other physiologic responses such as heart rate, activity levels and skin temperature.

While this sort of technology may play a big role in the future of patient monitoring, its nearer-term impact will be on how drugs are brought to market.


Ingestible sensors can enable pharmaceutical companies to develop drugs more quickly and cost-efficiently because the devices can provide real-time data about how the medications are working.

Oracle, which has also invested in the company, is using Proteus’s technology to give its clinical trial application customers the ability to access real-time data provided by these sensors to help improve clinical trial efficiency.

But with the benefits of data collected by ingestible and implantable sensors also come unprecedented risks, experts say.

“If we could have that continuous information right now, you could tell what your immune system is fighting right now and that’s an exciting promise. But it’s going to come with a devil’s bargain. In order to obtain that data you must first agree to surrender that data,” Eckersley said.

One possibility is that the insurance companies will use the data to determine whose premiums will be higher.


In the U.S. health care system, insurance companies and healthcare providers are often fighting about who pays for what. This data stream is going to become entangled in this debate and may be used by insurance companies to determine whose premiums will be higher, he said.

“The real reason people will want them is because they will want access to data about what’s going on in their blood and in their immune system,” Eckersley said. “But even if it cures some of those illnesses, it’s going to leave us with a major privacy headache.”

Making sure the data collected by implantable devices is accurate and secure will also be critical as more of these machines come on the network, said Eric Dishman, an Intel fellow and general manager of the company’s health and life sciences group.

Dishman said that in a ten-year timeframe he expects one-third of the population will have either a temporary device or another more permanent connected device in their body and the data collected by these machines will need to be protected.

“Growth in this area means more devices on the network and with that means there is going to be the risk of hacking and we have to be ahead of that to keep it from happening,” Dishman said. “We are going to save lives, but we need to protect that data first.”

Intel is focusing on developing an end-to end solution to ensure that the data is safe and reliable as it travels from the machine inside the body to the cloud and then to a trusted physician, Dishman said.

Eckersley, though, said even protected, encrypted data still always seems to find its way into the hands of those who it isn’t intended for and information collected by sensors inside our body probably won’t be any different.

“Unfortunately, there just isn’t much we can do in today’s world to protect ourselves from corporations or governments having access to all of this information like where we go, who we meet with, what we think and what we read. In a connected world, all of this is an open book,” Eckersley said. “Sensors in our bodies may just turn into the next phase of this transition.”

Facebook to acquire WhatsApp for an eye-popping $19 billion


Facebook announced that it would acquire cross-platform messaging company WhatsApp in a $19 billion cash-and-stock deal.

Facebook said it will pay $4 billion in cash and dole out approximately $12 billion worth of Facebook shares. An additional $3 billion in restricted stock units will be granted to WhatsApp employees and founders, vesting over the next four years.
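For the record, the reported components add up to the headline figure:

```python
# Components of the deal as reported, in billions of USD
cash = 4
facebook_shares = 12             # approximate value of the stock portion
restricted_stock_units = 3       # vesting over four years

total = cash + facebook_shares + restricted_stock_units
print(f"${total} billion")  # $19 billion
```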

“WhatsApp is on a path to connect 1 billion people. The services that reach that milestone are all incredibly valuable,” said Mark Zuckerberg, CEO of Facebook, in a statement. “I’ve known Jan for a long time and I’m excited to partner with him and his team to make the world more open and connected.”

Facebook said WhatsApp, which enables users to send messages over the Internet and thereby bypass their wireless carrier, has 450 million monthly users, 70 percent of whom use it on any given day.
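Those two figures imply roughly 315 million daily users. A quick back-of-envelope check (the numbers are the ones Facebook quoted; the function and its name are my own):

```python
def daily_active_users(monthly_users: int, daily_ratio: float) -> int:
    """Estimate daily active users from monthly users and the share active on a given day."""
    return round(monthly_users * daily_ratio)

# Facebook's figures: 450 million monthly users, 70 percent active on any given day.
print(daily_active_users(450_000_000, 0.70))  # 315000000
```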

“WhatsApp’s extremely high user engagement and rapid growth are driven by the simple, powerful and instantaneous messaging capabilities we provide,” said Jan Koum, CEO of WhatsApp. “We’re excited and honored to partner with Mark and Facebook as we continue to bring our product to more people around the world.”

Facebook said that WhatsApp will continue to operate independently and will keep its headquarters in Mountain View, Calif. Koum will join Facebook’s board of directors, and WhatsApp’s core messaging product and Facebook’s existing Messenger app will continue to operate as standalone applications.

“Facebook fosters an environment where independent-minded entrepreneurs can build companies, set their own direction and focus on growth while also benefiting from Facebook’s expertise, resources and scale. This approach is working well with Instagram, and WhatsApp will operate in this manner,” Facebook said.

Mobile continues to be a huge moneymaker for Facebook. For the first time ever, mobile made up more than half of the social network’s ad revenue in the fourth quarter.

Financial death toll rises to 6, as JP Morgan employee jumps from Asian HQ


Interesting, almost too much for coincidence… almost.

A third JP Morgan employee has died under mysterious circumstances in a matter of a few weeks. A still-unidentified Chinese man in his thirties jumped from the roof of Chater House, the 30-floor Hong Kong headquarters of JPM.

An eyewitness told the South China Morning Post he saw a man climb onto the roof of the skyscraper shortly after lunchtime on Tuesday.

Despite attempts to talk him down, the man jumped before emergency crews arrived, landing on the road outside the building. The man who jumped was taken to Ruttonjee Hospital where he was declared dead on arrival.


The unidentified man, aged about 33, was a junior employee who served a supporting function at the bank and wasn’t involved in investment activity, according to Bloomberg. Rumors circulating in the media say that his last name was Li.

A colleague of the man said that before the suicide he had complained about heavy work-related stress, though police say no suicide note has been found.

“Out of respect for those involved, we cannot yet comment further. Our thoughts and sympathy are with the family that’s involved at this difficult time,” JP Morgan said in an e-mailed statement.

The latest apparent suicide marks the third sudden death at JP Morgan and the sixth in the global financial world in just a few weeks.

On February 3, a 37-year-old JP Morgan executive director died at his home in Stamford, Connecticut. The cause of death remains unclear and will be determined after a toxicology report is completed.

About a month ago, 39-year-old Gabriel Magee, a JP Morgan vice president in technology operations, died after falling from JPMorgan’s London headquarters.

Other apparent business suicides include a 58-year-old former senior manager for Deutsche Bank, who was found hanged in his home; Karl Slym, the 51-year-old managing director of Tata Motors; and 50-year-old Mike Dueker, who worked for Russell Investments and was found dead on January 29 near the Tacoma Narrows Bridge in Washington State, after being reported missing that same day.

Reporter David Bird, who works in the Dow Jones newsroom, has also been missing since January 11, when he left his New Jersey home for a walk.
Fine burden

Major world banks are under regulatory scrutiny over so-called “pre-crisis cheating” and multi-billion-dollar rigging of benchmark and commodity rates.

JP Morgan and Deutsche Bank have been hit the hardest, with JPM being fined a record $4 billion and Deutsche Bank facing a $1.93 billion bill.

In January, JP Morgan also admitted it had aided the Bernie Madoff Ponzi scheme by turning a blind eye to it, but the US Department of Justice decided not to send anyone from the firm to jail, under a deferred prosecution agreement.

In March 2013, the US Senate Permanent Subcommittee on Investigations published a 307-page report describing in detail JP Morgan’s financial irregularities and its deliberate masking of critical financial information.

More recently, JP Morgan was fined $614 million for concealing the full risk associated with the mortgage securities it sold Freddie Mac and Fannie Mae ahead of the crisis.

In September last year JP Morgan Chase agreed to pay $920 million in fines to settle probes related to the “London Whale” financial debacle of 2012. Bank employee Bruno Iksil, nicknamed “London Whale” for the size of his operations, was notorious for his “casino bets” of other people’s money, which caused the bank about $6.2 billion in losses.

Overall, eight world banking giants were fined a record combined total of €1.71bn by the European Commission for manipulating the benchmark Libor and Euribor rates.

According to the EU investigation, Deutsche Bank, Barclays, Société Générale, RBS, UBS, JPMorgan, Citigroup and RP Martin were part of two separate illegal cartels which conspired to manipulate Euribor and Libor to benefit their own positions in euro and Japanese yen-denominated interest rate derivatives markets.

Deutsche Bank is now running internal probes into whether its traders manipulated interbank and foreign exchange rates.

SkyDrive Becomes OneDrive As Microsoft Aims To Take On Dropbox And Google Drive


Peter Suciu for redOrbit.com – Your Universe Online

Microsoft’s cloud-based SkyDrive is now the one – or more accurately, OneDrive. Three weeks ago the Redmond software giant announced that it was rebranding its SkyDrive cloud storage service after losing a trademark case to the UK-based British Sky Broadcasting Group last summer.

This week Microsoft officially rolled out the rebranded OneDrive. It is much more than a mere name change, however: the move has allowed Microsoft to introduce a range of new features, including improved video sharing and newly updated apps for Windows Phone, Android, iOS and Xbox.

“When someone picks up their phone, tablet or any other device, they just want all of their favorite photos and the documents they need at their fingertips — they don’t want to have to hunt for them,” said Chris Jones, corporate vice president of OS Services at Microsoft via a statement. “That’s the lens we are taking with OneDrive. We’re building it right into all of the latest Microsoft devices and services — from Xbox to Windows Phone and Windows 8.1 to Office — but we’re also making sure it’s available on the Web and across all other platforms including iOS and Android, so your photos, videos and files are all available anytime you need them.”

On Wednesday Microsoft announced that the first 100,000 customers who access their accounts after launch will receive an additional 100 GB of complimentary storage for one year. Microsoft is already offering the first 7GB of storage for free, while users can also receive up to five more gigabytes for referrals, and a further 3GB is offered to anyone utilizing the service’s camera backup feature.
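Adding up the complimentary tiers gives a sense of how much free space a single account could reach. A small sketch (the tier values are from the article; the function and its name are my own):

```python
def max_free_onedrive_gb(include_promo: bool = False) -> int:
    """Sum the complimentary OneDrive storage tiers described at launch, in GB."""
    base = 7                              # free for every account
    referrals = 5                         # up to 5 GB via referral bonuses
    camera = 3                            # camera-backup feature bonus
    promo = 100 if include_promo else 0   # first 100,000 customers, one year only
    return base + referrals + camera + promo

print(max_free_onedrive_gb())      # 15
print(max_free_onedrive_gb(True))  # 115
```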

These offerings could give Microsoft an edge over rival cloud-storage services. Apple currently provides only 5GB of free storage on its iCloud service, while Google offers 15GB free on Google Drive. The popular Dropbox service currently offers just 2GB of free storage.

Jones also noted that OneDrive is a way to make backing up to the cloud easier.

“Our goal is to make it as easy as possible for you to get all of your favorite stuff in one place—one place that is accessible via all of the devices you use every day, at home and at work,” Jones wrote on the official OneDrive blog. “Because let’s face it, until now, cloud storage services have been pretty hard to use, and the vast majority of us still have our stuff spread out everywhere. In fact, according to a recent poll (Online survey conducted by Harris Poll on behalf of Microsoft Corp from December 19-31, 2013, among 801 adults ages 18 and older who have previously heard of cloud storage.), at least 77% of people who are familiar with the cloud still have content stored on a device that is not backed up elsewhere.”

This week Microsoft also renamed SkyDrive Pro, the corporate-grade online storage service that is tied to Office 365, to OneDrive for Business. It did not provide further details except to announce that it will reveal more about this service at the SharePoint Conference, which is scheduled for March 3-6 in Las Vegas.

The name change from SkyDrive to OneDrive is not the first time Microsoft has renamed one of its products or services. In August 2012 Microsoft changed the name for its tile-based design in Windows 8 and Windows Phone from “Metro” to “Modern UI Style.” Microsoft had reportedly faced a lawsuit from German retailer Metro AG.


Akamai Completes Acquisition of Prolexic


Akamai Technologies announced it has completed its acquisition of Prolexic Technologies, a privately held company based in Hollywood, Florida that provides cloud-based security solutions for protecting data centers and enterprise IP applications from distributed denial of service (DDoS) attacks.  On December 2, 2013, Akamai announced a definitive agreement between the parties pursuant to which Akamai would acquire all of the outstanding equity of Prolexic.

The combination of the two companies’ technologies and teams creates a portfolio of security solutions designed to protect an enterprise’s Web and IP infrastructure against application-layer, network-layer and data center attacks delivered via the Internet.

“Cyber-attacks are on the rise all over the world, with many resulting in significant economic or reputational damage to enterprises, governments, and end-users,” said Tom Leighton, CEO of Akamai.  “With our acquisition of Prolexic now complete, we are excited to extend the value of our combined teams and offerings to online businesses across all industries.  Our focus is to optimize and secure the ‘entire IP experience,’ Web or otherwise, and to provide an unparalleled layer of protection against the most sophisticated of attacks without sacrificing site or application performance.”

On its February 5, 2014 earnings call, Akamai provided its outlook for the first quarter 2014, including revenue in the range of $426 to $442 million and non-GAAP earnings per share (EPS) in the range of $0.51 to $0.55, excluding any effect from the Prolexic acquisition. The Company is now updating that guidance to incorporate the effects it anticipates from the Prolexic acquisition as described on that earnings call.

The Company expects that the Prolexic acquisition will add first-quarter revenue in the range of $7 to $8 million. Akamai further expects the acquisition to be dilutive to non-GAAP earnings per share by approximately $0.01 during the quarter. As a result, the Company now expects first quarter 2014 results as follows:

•Revenue in a range of $433 to $450 million; and

•Non-GAAP earnings per share in the range of $0.50 to $0.54.
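The updated guidance is simply the original ranges shifted by the expected Prolexic contribution. A quick arithmetic check, working in whole millions for revenue and cents for EPS to keep the numbers exact (the helper is my own, not Akamai's):

```python
def updated_range(low: int, high: int, delta_low: int, delta_high: int) -> tuple:
    """Shift a guidance range by an expected incremental range."""
    return (low + delta_low, high + delta_high)

# Revenue: $426-442M original guidance, plus $7-8M expected from Prolexic.
print(updated_range(426, 442, 7, 8))  # (433, 450)
# EPS in cents: 51-55 original, less ~1 cent of expected dilution.
print(updated_range(51, 55, -1, -1))  # (50, 54)
```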


Snapchat Poaches Top Google Cloud Engineer


Snapchat, which earlier made headlines by rebuffing acquisition bids from top tech giants like Google and Facebook, has announced that it has hired a top Google cloud engineer to beef up its engineering talent.

A report in The Wall Street Journal says that Snapchat has hired Robert Magnusson, a director at Google who works on Google App Engine, the exact platform on which Snapchat itself runs.

This may not be the last engineering hire we hear about, because Snapchat intends to grow its technical team from 15 to 50 in the coming months, co-founder Bobby Murphy told the WSJ.

The WSJ further reported that Magnusson will likely work toward reducing Snapchat’s reliance on services like Google by building its own platform. Magnusson seems to have taken affront at this characterization and wrote on his Google Plus page (though it arguably says the same thing in different words):

    Ok for the record, the quote “Part of his new job at Snapchat will be building technology infrastructure in-house so that the company can begin to lessen its reliance on partners like Google, Murphy said.” is (a) not what Bobby said, (b) not really a focus of my job either.  Thx WSJ for pissing off all my old Google friends.  A more correct statement is that we’ll continuously evaluate alternatives, and likely over time develop more infrastructure ourselves, in particular in specialized areas of our apps.  Google is a great partner, and the success of Snapchat would simply not have been possible without Google Cloud, and we expect to work closely together.  Period.

Magnusson has been the head of Google App Engine platform since 2010.

Google Acquires Sound Authentication Startup ‘SlickLogin’


Google has officially acquired Israeli security startup SlickLogin and, we admit, their oft-discussed technology is pretty neat. SlickLogin focuses on using sound waves to give users a means of authenticating into websites they frequent.

Sounds crazy? It’s a fairly novel concept, we admit, and one that must have caught Google’s attention fairly quickly. SlickLogin’s technology first went into closed beta in September of last year and now, just five months later, the company has been bought out in a transaction whose details remain undisclosed.

“Today we’re announcing that the SlickLogin team is joining Google, a company that shares our core beliefs that logging in should be easy instead of frustrating, and authentication should be effective without getting in the way. Google was the first company to offer 2-step verification to everyone, for free – and they’re working on some great ideas that will make the internet safer for everyone. We couldn’t be more excited to join their efforts,” reads a post on SlickLogin’s website.

It remains to be seen if, or how, Google might add the company’s work to its lineup of web services — and perhaps even the Android operating system itself. If it does, it could make for a novel new way to authenticate into Google-based apps.

SlickLogin’s technology uses a combination of protocols to start the authentication process. WiFi, Bluetooth, or QR codes – to name a few – are used to verify that, yes, a user’s smartphone is located somewhere near one’s active desktop or laptop computer. After that, it gets fun: the computer emits a unique high-frequency sound wave from its speakers. A smartphone app recognizes the sound and verifies that it’s actually you and your phone attempting to log in. Once it verifies you, your smartphone sends the green light to the site you were attempting to log into, and off you go.
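SlickLogin has not published its protocol, so the following is only a sketch of the general idea the description suggests: the server issues a fresh random challenge, the desktop plays it as sound, and the phone answers with a keyed MAC over the challenge plus a timestamp so a recorded replay is rejected. Every name and detail below is my assumption, not SlickLogin's actual design.

```python
import hashlib
import hmac
import os
import time

def make_challenge() -> bytes:
    """Server side: a fresh random nonce, unique to this login attempt."""
    return os.urandom(16)

def phone_response(shared_key: bytes, challenge: bytes, now: float):
    """Phone side: MAC over challenge + timestamp, using a key only this phone holds."""
    msg = challenge + str(int(now)).encode()
    return hmac.new(shared_key, msg, hashlib.sha256).digest(), now

def server_verify(shared_key: bytes, challenge: bytes, tag: bytes, ts: float,
                  now: float, max_age: float = 10.0) -> bool:
    """Server side: recompute the MAC and reject stale responses (replayed audio)."""
    if now - ts > max_age:
        return False
    expected = hmac.new(shared_key, challenge + str(int(ts)).encode(),
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

key = os.urandom(32)                # provisioned to the phone at enrollment
c = make_challenge()
tag, ts = phone_response(key, c, time.time())
print(server_verify(key, c, tag, ts, time.time()))         # fresh response: accepted
print(server_verify(key, c, tag, ts, time.time() + 60.0))  # stale replay: rejected
```

Binding the response to both a one-time nonce and a timestamp is what would make a recorded audio clip useless later, matching TechCrunch's description that "the audio is uniquely tied to that moment."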

Those with fairly active technological imaginations can likely conjure up a host of possible ways that this play-a-sound login technology could go wrong. However, SlickLogin claims to have these issues under control. As reported by TechCrunch in September of last year:

“Everything is very heavily encrypted, so man in the middle attacks are out. You can’t record the audio signal and just play it back later, as the audio is uniquely tied to that moment. You can’t just hold your phone up to someone else’s audio signal (or grab it from across the room with a directional mic) in hopes of getting logged in to their account before they do; your phone wouldn’t have their login credentials stored on it, and that crucial bit isn’t wrapped into the sound. If anything, you’d just log them in to your own account.”

Of course, if somebody actually gets their hands on your smartphone, you’re hosed. Still, SlickLogin’s approach at least lessens how much you have to do, or type, to access your favorite two-factor-secured sites.

Nvidia Unleashes First ‘Maxwell’-Class GPUs


Nvidia on Tuesday introduced a pair of new mid-range graphics processors, the GeForce GTX 750 Ti and GeForce GTX 750, the first in the graphic chip maker’s stable to use elements of its next-generation “Maxwell” architecture.

The real onslaught of Maxwell-class products will arrive later this year, but for now, these two new GPUs for desktop PCs offer a teaser into the performance-per-watt gains Nvidia is getting with the architecture that succeeds its “Kepler” generation of GPUs.

For example, Nvidia claimed that the GTX 750 Ti more than doubles the performance of the older GeForce GTX 550 Ti, while using just over half the power to get there. The upshot is that it’s now possible to build a mini-ITX gaming PC using the GTX 750 Ti, which gives gamers 51 frames-per-second on just a 60-watt power draw, according to Nvidia.

Both of the new Maxwell GPUs are 28-nanometer chips. The GTX 750 Ti sports 640 Cuda cores with a base clock of 1.02GHz, which can be throttled up to almost 1.1GHz. Nvidia is offering the GPU in 1GB or 2GB memory configurations—memory speed is 5.4 Gbps for the 128-bit GDDR5 memory. The GTX 750 Ti supports DirectX 11.2 and OpenGL 4.4, and 16 lanes of PCI Express 3.0.

The new GeForce GTX 750 delivers 58 fps performance while drawing just 55 watts. It has 512 Cuda cores with a base clock of 1.02GHz, which is boostable to nearly 1.1GHz. With 1GB of 128-bit GDDR5 memory, the GTX 750 has half the memory of the 750 Ti in its full configuration, with the same 5.4 Gbps memory speed. It also supports DirectX 11.2 and OpenGL 4.4, and 16 lanes of PCI Express 3.0.
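Those memory figures pin down each card's peak memory bandwidth: per-pin data rate times bus width, divided by eight to convert bits to bytes. A quick check (the formula is the standard one; the function name is mine):

```python
def memory_bandwidth_gbs(data_rate_gbps_per_pin: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate times bus width, in bytes."""
    return data_rate_gbps_per_pin * bus_width_bits / 8

# Both the GTX 750 Ti and GTX 750: 5.4 Gbps GDDR5 on a 128-bit bus.
print(memory_bandwidth_gbs(5.4, 128))  # 86.4 GB/s
```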


Nvidia is releasing the new GTX 750 Ti and GTX 750 just as rival Advanced Micro Devices introduces its own new mid-priced Radeon GPUs using its Graphics Core Next architecture. AMD’s new Radeon R7 250X and Radeon R7 265 look to be a bit cheaper than Nvidia’s next-gen GPUs, but they also draw considerably more power than their GeForce counterparts.

The GeForce GTX 750 Ti is priced at $149 and the GeForce GTX 750 is priced at $119.


Business interests push for more unlicensed spectrum


WifiForward group brings together cable operators, tech companies

By John Cox, Network World

A new industry group has unintentionally thrown light on the business interests that underlie debates over U.S. spectrum policy.

The group, calling itself WifiForward, says it will work to expand the amount of unlicensed spectrum, which can be used by Wi-Fi, Bluetooth, and Zigbee radios for streaming video, talking to your smartphone, home heating/cooling sensors and a host of other uses.

Its membership consists of cable companies, their equipment suppliers, consumer electronics business groups and retailers, specialized “advocacy” (or lobbying) groups, and technology companies including Microsoft and Google. (A complete list is available on the group’s website.) Notable by their absence are other broadband companies, both wireline and wireless carriers such as AT&T Wireless and Verizon Wireless, and other network operators.

WifiForward unveiled itself this week, with the self-appointed mission of “working to alleviate the Wi-Fi spectrum crunch and to support making Wi-Fi even better by finding more unlicensed spectrum.”

“WifiForward’s mission is to educate all of those groups [consumers, policymakers, representatives of non-tech industries, and the media] and draw the connections very clearly between unlicensed spectrum with innovation, investment and job growth,” according to a spokeswoman, via email. “We’ll do that by being the voice for unlicensed technologies and telling the stories of these technologies in use across all industries.”


“The coalition will not file in proceedings at the Commission, nor lobby members of Congress directly,” according to the email.

Both Comcast and Time Warner are members of WifiForward. Both have aggressive Wi-Fi offerings, in part to compete with the wireline and wireless telco carriers. Comcast offers subscribers access to over 500,000 hotspots nationwide, while non-subscribers can pay for hourly, daily or weekly Wi-Fi passes. Time Warner offers subscribers free access to over 200,000 hotspots, its own and those of partners.

But WifiForward seems to be preaching to the already-converted, as there’s little evidence of a debate about the virtues of expanding unlicensed spectrum. Over a year ago, then-FCC Chairman Julius Genachowski announced that the agency would work quickly to add 195MHz of spectrum for Wi-Fi in the 5GHz band, increasing the available capacity by 35 percent. According to Genachowski, the action was needed because Wi-Fi faces a likely spectrum crunch, analogous to the much-discussed cellular spectrum crunch. Current FCC Chairman Tom Wheeler reiterated that commitment last month.

Another spectrum bonanza opened by the FCC, the so-called white spaces of unused TV signals, is already being explored in pilot networks springing up in the U.S. and around the world. A typical example is a recent network deployed at West Virginia University. [See “Better than TV! White spaces bring wireless bonanza to West Virginia”] Wi-Fi clients connect to an access point, which then interfaces with a white spaces radio, in effect piggybacking on the white spaces spectrum. Eventually, radio chips will let clients directly access these bands.

500,000 Belkin WeMo users could be hacked; CERT issues advisory – Unplug Your WeMo Devices!


When U.S. CERT comes knocking, it seems unwise for a company to stick its head in the sand and hide. But that’s reportedly what happened when the CERT division of the Carnegie Mellon Software Engineering Institute tried to contact Belkin about numerous vulnerabilities discovered in Belkin WeMo home automation devices.

Belkin WeMo security holes threaten over half a million users


CERT was contacted by researchers from IOActive after they uncovered “multiple vulnerabilities in Belkin WeMo Home Automation devices that could affect over half a million users.” Since Belkin failed to issue a fix for any of the flaws, IOActive “recommends unplugging all devices from the affected WeMo products.”

If you’ve dropped any money into WeMo products, such as the WeMo Switch, WeMo Motion, WeMo Light Switch and WeMo Insight Switch, then you are probably not fond of the idea of unplugging your WeMo home automation devices. With apps for both Android and iOS making setup quick and easy, WeMo products are some of the most popular home automation devices on the market. However, according to the CERT advisory for WeMo, “A remote unauthenticated attacker may be able to sign malicious firmware, relay malicious connections, or access device system files to potentially gain complete access to the device.” Furthermore, “We are currently unaware of a practical solution to this problem.”

There are five separate vulnerabilities listed in CERT’s advisory, starting with “Belkin Wemo Home Automation firmware contains a hard-coded cryptographic key and password. An attacker may be able to extract the key and password to sign a malicious firmware update.”
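Why a hard-coded signing key is fatal: anyone who extracts one firmware image recovers the key for every shipped device and can then mint update signatures those devices will accept. A minimal illustration of the failure mode (this is not Belkin's actual scheme; the HMAC construction and all names are my assumptions):

```python
import hashlib
import hmac

# The same key baked into every shipped firmware image: an attacker
# extracts it once, from any single unit, and it works everywhere.
HARDCODED_KEY = b"example-key-shared-by-all-devices"

def sign_firmware(key: bytes, image: bytes) -> bytes:
    """Vendor-style update signature: a keyed MAC over the firmware image."""
    return hmac.new(key, image, hashlib.sha256).digest()

def device_accepts(image: bytes, signature: bytes) -> bool:
    """Device-side check: accepts any image whose MAC matches, including a forgery."""
    return hmac.compare_digest(sign_firmware(HARDCODED_KEY, image), signature)

malicious = b"attacker firmware payload"
forged = sign_firmware(HARDCODED_KEY, malicious)  # attacker reuses the extracted key
print(device_accepts(malicious, forged))  # the device would install it
```

The usual fix is asymmetric signing: the device ships with only a public key, and the private signing key never leaves the vendor.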

IOActive researchers published a five-page report [pdf] detailing the WeMo flaws, but warned in simple terms that the WeMo vulnerabilities “expose users to several potentially costly threats, from home fires with possible tragic consequences down to the simple waste of electricity.”

    Additionally, once an attacker has established a connection to a WeMo device within a victim’s network; the device can be used as a foothold to attack other devices such as laptops, mobile phones, and attached network file storage.

IOActive is far from the first to warn about WeMo’s hackability; in January 2013, researcher Daniel Buentello plugged a lamp into a WeMo switch and “made it blink like it was possessed, with the relay clicking on and off, faster and faster like it might blow up until it had a strobe effect.” In October 2013, a researcher highlighted security flaws in Belkin’s WeMo Switch, Wi-Fi NetCam and WeMo Baby that made eavesdropping easy.

Of course it’s not just WeMo; at the 2013 Black Hat Home Invasion v2.0 presentation, Trustwave researchers discussed poor security issues discovered when testing a Belkin WeMo Switch, Linksys Media Adapter, Radio Thermostat, and Sonos Bridge…as well as a $6,000 Satis smart toilet. In fact, hacking and attacking automated homes, targeting Zigbee and Z-wave wireless protocols, were hot topics in 2013 at Black Hat USA and Def Con. In August 2013, an attacker hacked a Foscam wireless IP camera to spy on and curse at a baby. TRENDnet IP cameras have been a Peeping Tom’s paradise since at least 2011.

The Internet of Things is expected to be “roughly equal to the number of smartphones, smart TVs, tablets, wearable computers, and PCs combined,” according to a forecast from BI Intelligence. There are currently about 1.9 billion IoT devices, but that’s predicted to reach 9 billion by 2018. Cisco predicts the IoT will grow to 50 billion devices by 2020. Have you ever stopped to wonder how many of those 9 to 50 billion IoT devices will be insecure and exploitable?
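Growing from about 1.9 billion devices to 9 billion by 2018 implies a compound annual growth rate of roughly 36 percent. A quick check (the five-year window is my reading of the forecast; the function is my own):

```python
def implied_cagr(start: float, end: float, years: float) -> float:
    """Compound annual growth rate implied by start value, end value, and period."""
    return (end / start) ** (1 / years) - 1

# BI Intelligence: ~1.9B devices today, ~9B forecast for 2018 (about 5 years out).
print(round(implied_cagr(1.9e9, 9e9, 5), 3))  # roughly 0.365, i.e. about 36.5% a year
```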

Belkin had better get its head out of the sand and patch these holes lickety-split, because not everyone will hear about the flaws, and not everyone who does will toss out their WeMo investment. If even half of them keep their devices plugged in, and WeMo were hacked or caused fires in all those homes, roughly a quarter of a million of them, that would be an ugly lawsuit. Get busy, Belkin!

Agilent Technologies and FIME to Deliver Advanced Test Tools for Mobile Payments


Agilent Technologies Inc. (NYSE: A) and FIME, an advanced secure-chip testing provider, have announced an agreement to work together to deliver pioneering testing solutions to the payments and telecommunications markets.

The advancement of innovative tools to test and certify secure-chip mobile payment technology will be a key priority for the two organizations. This work will capitalize on the combined knowledge of FIME’s established expertise in the payments sector together with Agilent’s strong heritage in the telecommunications industry.

Pascal Le Ray, general manager of FIME, said, “Over the past 12 months we have continued to open new test centers around the world to be closer to our stakeholders, better understand their needs and respond quickly to their requirements. We look forward to enhancing our product offering with Agilent as part of our ongoing expansion and to meet the evolving demands of the industry.”

“The next generation of smartphones is bringing mobile payment and mobile communications together and into the mainstream,” said Joe DePond, vice president and general manager, Agilent’s Mobile Broadband Operation. “Working together with FIME, we will provide test solutions to help customers deliver new products that will delight consumers around the world.”

About FIME

FIME is a trusted consultant and advanced end-to-end testing services provider within the payment, mobile telecom, e-ID and transit sectors. Its work ensures the successful and efficient market integration of products and solutions which use secure chips. Its wealth of testing knowledge and skills accelerates product time to market and promotes security, interoperability and confidence that products will deliver optimum performance once launched.

FIME has extensive EMV testing expertise working with banks, technology providers and authorities to develop the testing frameworks for international and domestic EMV-compliant payment schemes. FIME partners with leading payment schemes and industry bodies to provide certifications and enhance the secure-chip ecosystem: American Express, Discover, eftpos, EMVCo, First Data, GCF, GlobalPlatform, GSMA, Interac, Isis, JCB, MasterCard, Network for Electronic Transfers (NETS), NFC Forum, National Payments Corporation of India (NPCI), National Standard for Chip Card Specification (NSICC), OSCar Consortium and Visa.

http://www.fime.com | Twitter | LinkedIn

About Agilent Technologies

Agilent Technologies Inc. (NYSE: A) is the world’s premier measurement company and a technology leader in chemical analysis, life sciences, diagnostics, electronics and communications. The company’s 20,600 employees serve customers in more than 100 countries. Agilent had revenues of $6.8 billion in fiscal 2013. Information about Agilent is available at http://www.agilent.com.

On Sept. 19, 2013, Agilent announced plans to separate into two publicly traded companies through a tax-free spinoff of its electronic measurement business. The new company is named Keysight Technologies, Inc. The separation is expected to be completed in early November 2014.

Intel invests in Chinese cloud providers, eyes makers of components for wearables


IDG News Service (Beijing Bureau) — The venture capital arm of Intel has invested in three Chinese cloud providers to take advantage of the country’s booming market for the services, and is also looking at backing local firms involved in creating wearable devices.

Intel Capital said Tuesday it is investing an undisclosed amount in Shanghai Yeapoo Information Technology, Tianjin Zhongke BlueWhale Information Technology and Wuxi China Cloud Technology Service.

The chip maker is making the investment as its server business in China has seen double-digit growth over the last five years. Cloud computing services have helped fuel the demand, as the nation’s companies and municipal governments move toward building services around the Internet and mobile devices.

“The age of cloud computing has come,” said Xu Shengyuan, general manager of Intel Capital China, on Tuesday. “We all know that cloud computing is a major opportunity,” he added.

Yeapoo, one of the companies Intel has invested in, is helping Chinese businesses build websites specifically for mobile devices. In addition, Yeapoo offers ways for companies to market their online services, and analyze incoming customer data.

BlueWhale provides network storage products, while Wuxi China Cloud is a major cloud infrastructure provider, operating its servers in data centers across the nation.

Outside of cloud computing, Intel Capital is also focused on investing in China’s wearables industry. Any investment, however, would be geared more toward component makers, such as those that specialize in making sensors, Xu said.

“We won’t necessarily invest in a product a consumer ends up buying,” he added. “We are still a technology company, so we are more interested in the technology.”

Virtual Keyboard Technology? It’s about time…or too soon?


By Danny Stieben

A Cool Product Review


The Celluon EPIC is a virtual projection keyboard: it uses laser light to project an image of a keyboard onto a flat surface in front of it. It can then detect whenever you “press a key” on that surface and translate the press into a keystroke the computer actually understands. Long story short, it’s a fancy keyboard that looks futuristic and can theoretically provide some benefits (primarily savings in space and weight compared to a regular Bluetooth keyboard). You can get it off of Amazon for $149.99.


Surprisingly, there don’t seem to be any competitors to the Celluon EPIC. Celluon claims it makes the world’s only virtual projection keyboard, and that claim appears to be true. The only other product to be found is the Celluon Magic Cube ($150), which seems to be the EPIC’s predecessor, though it could be considered the only competitor since it’s still being sold. This is pretty rare, given that the Celluon EPIC is already a few years old. If any other alternatives do exist, they aren’t prominent enough for a consumer to actually find and purchase.

The Celluon EPIC Virtual Keyboard comes with the following specifications:

    19mm key pitch
    Overall keyboard dimensions: 100mm height, 240mm width
    Device dimensions: 70mm x 35mm x 20mm
    Available as English QWERTY and German QWERTZ
    Recognition rate: 350 characters per minute (or 70 words per minute)
    Light source: red laser diode (IEC Class 1 Laser)
    660 mAh battery
    2 hours of use on battery, 3 hours to charge the battery
    Connect via Bluetooth
    Compatible operating systems: iOS 4+, Android 4.0+, Windows XP+, Mac OS X, BlackBerry 10, and Linux
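
As a quick sanity check on the spec sheet above, the quoted 350 characters per minute does match 70 words per minute under the common typing-test convention that one “word” equals five characters:

```python
def cpm_to_wpm(chars_per_minute: float, chars_per_word: float = 5.0) -> float:
    """Convert characters per minute to words per minute using the
    common typing-test convention of five characters per word."""
    return chars_per_minute / chars_per_word

print(cpm_to_wpm(350))  # 350 / 5 = 70.0 words per minute
```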



Using the Celluon EPIC is a very interesting experience. The keyboard projected onto the surface in front of the device is decently sized and includes all of the keys you’ll care about; it’s nearly the same as a full-sized keyboard, but not quite. A few keys, such as apostrophes and quotation marks, sit along the very top where you’d find the F1-F12 keys on a normal keyboard. The projection is a bit faint in well-lit areas, but darker areas provide a well-defined keyboard.

To get the keyboard to function correctly, you have to make absolutely sure that you’re using a flat, opaque surface. I tried using it once on a table that had a layer of glass on top of the wood, and it didn’t function exactly as it should — it thought I was pressing a neighboring key instead. However, once the conditions are right, the keyboard is actually pretty usable. The keys are recognized, and the maximum typing speed is acceptable (it’s still quite a bit lower than my normal typing speed, but it’s tolerable).

According to the manufacturer, keystrokes are detected via an invisible infrared layer combined with an optical sensor. The recognition process works as follows: when the user presses a key on the projected keyboard, the infrared layer is interrupted. This produces infrared reflections that are picked up by the sensor in three dimensions, allowing the system to assign a coordinate to each press and map it to a keyboard character. Whenever I cover the optical sensor, the device doesn’t detect any keystrokes, so this seems to be correct.
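That description suggests a simple pipeline: the sensor reports where on the surface a reflection occurred, and the firmware snaps that coordinate to the nearest cell of the projected key grid. Here is a minimal illustrative sketch; the layout slice, the grid-snapping logic, and the function names are my assumptions, not Celluon’s actual firmware:

```python
# Hypothetical sketch of the coordinate-to-key step described above.
# The layout slice and grid snapping are illustrative assumptions.
from typing import Optional

KEY_PITCH_MM = 19.0  # key pitch taken from the spec sheet

# A tiny slice of a QWERTY layout: (row, column) -> character
LAYOUT = {
    (0, 0): "q", (0, 1): "w", (0, 2): "e",
    (1, 0): "a", (1, 1): "s", (1, 2): "d",
}

def resolve_key(x_mm: float, y_mm: float) -> Optional[str]:
    """Snap a reflection detected at (x_mm, y_mm) on the surface to the
    nearest 19 mm grid cell and look up the corresponding character."""
    col = round(x_mm / KEY_PITCH_MM)
    row = round(y_mm / KEY_PITCH_MM)
    return LAYOUT.get((row, col))

print(resolve_key(20.0, 1.0))   # snaps to row 0, column 1 -> "w"
print(resolve_key(18.0, 21.0))  # snaps to row 1, column 1 -> "s"
```

A grid-snapping scheme like this would also explain the glass-table misreads described below: a refracted reflection lands a few millimeters off and snaps to a neighboring cell.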

Feedback Solution (And A New Problem)

One of my biggest concerns about this virtual keyboard was that I wouldn’t have any tactile feedback, a common criticism of keyboards without traditional physical keys that depress. However, I was very surprised: the Celluon EPIC compensates for this by beeping every time it recognizes a key press. Since this is still sensory feedback, it isn’t quite as awkward as no feedback at all.

The beeps do lead to another problem, though: they can get annoying for anyone else around you. If you were to, say, use this in a classroom where quiet is a must, that’s an absolute deal-breaker; you’d quickly annoy everyone in the class, as well as the teacher. Even worse, there’s no way to turn the beeping off. There’s no settings page you can access, only the on/off switch for the entire device, and nothing that controls the beeping. Sadly, while I think this solution is pretty clever, it creates another problem that in some ways may be even worse than the original issue.
Typing Speed

As I mentioned above, the keyboard is surprisingly accurate in the keystrokes it detects, but that accuracy comes at the price of typing speed. Typing at 70 words per minute may sound like a lot, but most people don’t type at a consistent speed for an entire minute; they have bursts of quicker and slower typing, and during one of your faster bursts the projection device will start to miss keystrokes. As a rough approximation, it can’t keep up with anyone typing much faster than a person who has become fairly proficient on a smartphone or tablet’s on-screen keyboard.
Satisfying the Flat, Opaque Surface Requirement

Do you always have a large, flat, opaque surface available wherever you go? Chances are the answer to this is no. In school, a lot of desks I sit at are quite small, so there wouldn’t be any room for a projected keyboard — or if there was, then there wouldn’t be enough room to also have a tablet standing on the same desk.

Also, a lot of times you’ll probably be using a tablet or computer on your lap, while you’re sitting down, not in front of any sort of table. In scenarios like this, you won’t be able to use the EPIC. Having a flat, opaque surface is a hard requirement, or else it simply won’t function. While it works well in the conditions required by the device, I don’t often come across those conditions while I’m on the go — if I’m at home, I’d rather be using a regular full-sized keyboard rather than the EPIC.


So, is the Celluon EPIC an awesome piece of technology? It sure is! Is it something you should put on your shopping list? I don’t think so. While it’s a really neat concept that actually works surprisingly well, it’s still far from being an obviously better option than a regular Bluetooth keyboard. Simply put, it’s too expensive, the beeping can’t be turned off, and the flat, opaque surface requirement may be quite difficult for some users to satisfy. I really think the drawbacks outweigh the benefits.

Colorfront showcases Landmark Cloud Innovations


BUDAPEST, Hungary, Feb. 18, 2014 /PRNewswire-iReach/ Colorfront (www.colorfront.com), the Academy and Emmy Award-winning developer of high-performance, on-set dailies and transcoding systems for motion pictures, high-end episodic television and commercials, is showcasing the latest advances of its ground-breaking Colorfront Cloud Services at HPA’s Tech Retreat, Palm Springs, and at NAB 2014, Las Vegas.

Announced in October last year, Colorfront Cloud Services enable powerful and streamlined distributed post-production, and have been developed by Colorfront in collaboration with leading Hollywood studios and post-production facilities. Colorfront is now also working with leading broadcasters and content producers in the UK, Europe, Japan, Brazil and the US, as they adopt large-sensor cameras for Ultra HD production.

Colorfront Cloud Services combine core elements of Colorfront’s industry-leading Transkoder engine technology, which is optimized for use on flexible, high-performance, cloud-based computing options.

Landmark innovations include Colorfront’s demonstration of faster than realtime remote upload and processing of ProRes 4444 ARRI Alexa television footage, during Colorfront’s Supersession User-Group in Los Angeles last November. In December, the company successfully demonstrated realtime, digital cinema quality, 4K JPEG2000 playback directly from Amazon’s S3 (Simple Storage Service) public cloud storage to a 4K DCI cinema projector, at the CineGrid Convention in San Diego.


Most recently, Colorfront Cloud Services successfully rendered several hours of original 4K RAW camera footage to a range of broadcast deliverable formats, via Amazon Web Services’ GPU-enabled servers running the Transkoder engine.

Colorfront will reprise all of these advances, plus new cloud-based EDL conforming and automatic visual effects pull capabilities, during HPA’s Tech Retreat, 17-21 February and NAB 2014, 7-10 April. A further highlight at HPA is the participation of Colorfront CTO Bill Feightner in a panel discussion entitled “Virtual/Distributed Post” on February 20.

“Colorfront Cloud Services is advancing rapidly, and the results are astonishing,” said Bill Feightner, CTO of Colorfront. “We have uploaded 4K material to secure cloud servers in Ireland and achieved realtime playback in Budapest. The same speeds and capabilities are also available across the US, enabling fast access to production media securely from multiple locations. This is great news for motion picture producers, and leading broadcasters who are also now evaluating how Colorfront Cloud Services can be optimized for broadcast deliverables from the latest large sensor digital cinematography cameras manufactured by ARRI, Sony, RED and Canon.”

Colorfront Cloud Services can be implemented in several flexible ways. Users have the choice of basing their service around Colorfront’s own secure cloud service in Los Angeles, a preferred private data center or facility house, or a range of public, cloud-based servers, such as Amazon Web Services.

Encryption provides security of all production media. Users can securely upload original camera footage, and associated metadata, to a choice of cloud servers, and perform and automate a range of key tasks. Initial services offered include dailies transcoding, deliverables rendering, EDL auto-conforming and VFX pulls.

Colorfront has already achieved considerable success in motion pictures and high-end episodic TV with its Primetime Emmy Award-winning On-Set Dailies and Express Dailies systems, which share many of Transkoder’s key technologies. The 2014 Oscar-nominated movies Nebraska, Captain Phillips, Gravity, Prisoners, Rush and The Wolf of Wall Street, along with hit TV series Dracula, True Blood, Strike Back and Mad Men, have all utilized Colorfront systems.

About Colorfront: Colorfront, based in Budapest, Hungary, is one of Europe’s leading DI and post production facilities. The company was founded by brothers Mark and Aron Jaszberenyi, who together played a pivotal role in the emergence of non-linear DI. The company’s R&D team earned an Academy Award for the development of Lustre, Autodesk’s DI grading system, and a Primetime Engineering Emmy for the Colorfront On-Set Dailies. Combining this in-depth expertise with a pedigree in the development of additional cutting-edge software, Colorfront offers today’s most advanced technologies for scanning and recording, DI grading, conforming, digital dailies, VFX, online and offline editing, cinema sound mixing, mastering and deliverables. For further information please visit www.colorfront.com.

Broadcom and QLogic Announce Sale and Purchase of Certain Ethernet Controller-related Assets and Entry Into ASIC Partnership


Broadcom Corporation (BRCM), a global innovation leader in semiconductor solutions for wired and wireless communications, and QLogic Corporation (QLGC), a leading supplier of high performance network infrastructure solutions, today announced a definitive agreement under which QLogic will acquire certain 10/40/100Gb Ethernet controller-related assets and non-exclusive licenses to certain intellectual property relating primarily to Broadcom’s programmable NetXtreme II Ethernet controller family. Total deal consideration is approximately $147 million in cash. In connection with the transaction, Broadcom and QLogic will enter into a long-term supply agreement whereby Broadcom will become ASIC supplier to QLogic in support of the NetXtreme II product line.

“This transaction is a win-win for our customers. Broadcom is focusing internal Ethernet controller efforts on strengthening its end-to-end data center platform while establishing a long-term ASIC supply relationship with QLogic in support of NetXtreme II Ethernet controllers,” said Rajiv Ramaswami, Executive Vice President and General Manager of Broadcom’s Infrastructure and Networking Group. “This transaction enables customers to be served without disruption by a leading partner, allows Broadcom to provide a broader solution portfolio overall and creates value for our shareholders.”


“We are pleased to enter into this partnership with Broadcom,” said Prasad Rampalli, President and Chief Executive Officer of QLogic. “QLogic gains world-class technology, an immediate presence serving enterprise customer Ethernet controller needs and an important long-term partnership to deliver end-to-end solutions. Going forward, this acquisition will form the foundation of our Ethernet controller business and accelerates our time-to-market with leading-edge technology.”

Concurrent with the closing, it is expected that QLogic will license certain Broadcom patents under a non-exclusive patent license agreement covering QLogic’s Fibre Channel products, in exchange for a license fee of $62 million.


The transaction has been approved by the boards of directors of Broadcom and QLogic and is subject to customary closing conditions. The transaction is expected to close in the first quarter of calendar 2014.

Excluding potential one-time gains related to this asset sale, Broadcom expects the transaction to be slightly accretive to earnings per share in 2014.

QLogic expects this transaction to be immediately accretive to revenue and non-GAAP earnings per share.

QLogic to Host Investors’ Conference Call

QLogic will host a conference call for analysts and investors today at 2:30 p.m. Pacific Time (5:30 p.m. Eastern Time) to discuss the transaction. The call and materials to be presented will be webcast live via the Internet at http://ir.qlogic.com. Phone access to participate in the conference call is available at (888) 713-4487, passcode 6366233.

The transaction information that QLogic intends to discuss during the conference call will be available on QLogic’s website at http://ir.qlogic.com for twelve months following the conference call. A replay of the conference call will be available via webcast at http://ir.qlogic.com for twelve months.

About Broadcom

Broadcom Corporation (BRCM), a FORTUNE 500 company, is a global leader and innovator in semiconductor solutions for wired and wireless communications. Broadcom products seamlessly deliver voice, video, data and multimedia connectivity in the home, office and mobile environments. With the industry’s broadest portfolio of state-of-the-art system-on-a-chip solutions, Broadcom is changing the world by connecting everything. For more information, go to www.broadcom.com.

About QLogic – the Ultimate in Performance

QLogic (NASDAQ:QLGC) is a global leader and technology innovator in high performance server and storage networking connectivity products. Leading OEMs and channel partners worldwide rely on QLogic for their server and storage networking solutions. For more information, visit www.qlogic.com.

Virtustream Announces Acquisition of ViewTrust Technology, Inc.


Virtustream, a leading provider of enterprise-class cloud solutions, today announced that it has acquired privately held ViewTrust Technology, Inc. (ViewTrust), an advanced security and compliance technology firm specializing in enterprise risk, cyber security and compliance solutions for government and enterprise customers worldwide. The acquisition closed on February 4th 2013. ViewTrust is headquartered in Falls Church, VA. The terms of the transaction were not disclosed.

“As organizations continue to move their most critical enterprise-class workloads to the cloud, security and compliance must be priority number one,” said Rodney Rogers, chairman and CEO, Virtustream. “Virtustream’s acquisition of ViewTrust assures our customers that we can provide a real-time, 360-degree view of their entire IT environment, allowing them to manage their enterprise risk while complying with regulatory requirements.”


As a result of the acquisition:

    Virtustream will expand its software portfolio to include ViewTrust’s line of products focused on enterprise risk management (ERM), cyber situational awareness and compliance.
    The ViewTrust capabilities will be further embedded in Virtustream’s xStream™ cloud management software (private, public and hybrid solutions for enterprises and service providers).
    Virtustream’s IaaS service offerings will be enhanced with advanced security including enterprise governance, risk and compliance (GRC) services, security (SIEM) and risk mitigation tools (real-time compliance) to help customers, including intelligence and defense focused enterprises, meet the requirements of advanced security and compliance audits as well as post compliance continuous monitoring and reporting.

“The acquisition underscores the virtue of security and compliance in bringing risk-averse enterprises to production clouds, particularly those that are subject to stringent regulatory requirements,” said Agatha Poon, research manager at 451 Research. “As Virtustream continues to raise the security bar for its xStream cloud management software and IaaS offerings, adding ViewTrust Technology to its arsenal will go a long way towards shoring up its credentials as a challenger brand.”


The relationship between the two companies developed in 2012, when Virtustream integrated ViewTrust’s GRC and risk management solution into xStream to enable commercial enterprises and government agencies to automate pre- and post-audit requirements and ensure continuous compliance monitoring. Built on a modular reference architecture that supports both SQL and NoSQL (Big Data) data stores, with conformance to the CAESARS (Continuous Asset Evaluation, Situational Awareness and Risk Scoring) framework and NIST 800-137, these tools address continuous enterprise risk and compliance management by ingesting and analyzing massive amounts of data, helping enterprises not only meet the requirements of a compliance audit but also support post-compliance analytics, monitoring and reporting.

“By integrating a comprehensive GRC, SIEM and Risk Management offering into our service, we will save our clients money, reduce deployment time to minutes, and ensure their environments are continuously monitored and secure,” added Kevin Reid, CEO and CTO, Virtustream.

ViewTrust’s suite of products will also enhance the Virtustream software portfolio, both as stand-alone products and as a fully integrated solution within Virtustream’s xStream™ cloud management software. ViewTrust’s assets will strengthen the xStream product suite by adding security-related IP to both the hybrid and core policy engine. Virtustream will be able to help customers manage and monitor compliance with leading cloud certifications and compliance frameworks, including FedRAMP, FISMA, DIACAP, ICD 503, CSA, SSAE16, PCI-DSS 2.0, SOX/GLBA, HIPAA HITECH, NERC CIP, ISO 27001-2005/2013, and others – in the customer’s environment as well as any xStream powered cloud.

Virtustream will continue to distribute ViewTrust’s Analytics and Continuous Monitoring Engine (ACE) platform, as well as its ComplyVision, ThreatVision, LogVision, and AssetVision software solutions. These software solutions provide Enterprise Risk Management (ERM/xGRC) services integrated with compliance management, cyber security and threat impact analysis to commercial and government clients. ViewTrust currently provides these ERM solutions to such clients as U.S. Department of State, U.S. Department of Treasury and TWD & Associates.

“We’re very excited to join the Virtustream team,” said Kaus Phaltankar, CISSP, CISA and CEO of ViewTrust Technology, Inc. “Virtustream’s xStream software is already architected to the highest security and compliance standards. By combining our technologies, Virtustream and ViewTrust will be even more capable of assisting customers in meeting risk management requirements for the enterprise and cloud deployments, as well as managing and complying with a wide variety of complex regulatory compliance reporting requirements.”

About Virtustream

Virtustream is the enterprise-class cloud software and service provider trusted by enterprises worldwide to migrate and run their mission-critical applications in the cloud. For enterprises, service providers and government agencies, only Virtustream’s xStream cloud management software and Infrastructure-as-a-Service (IaaS) meet the security, compliance, performance, efficiency and consumption-based billing requirements needed to move complex production applications to the cloud – whether private, public or hybrid. The company is headquartered in Washington, D.C. with offices in San Francisco, Atlanta, London and Dubai. Virtustream owns data centers in the U.S. and Europe with service provider partner data centers in Latin America, the Middle East and Asia.


Sony Reveals Digital Entertainment Apps Available on PlayStation 4 at Launch – From Netflix to Amazon, NHL & More!

Sony Computer Entertainment America LLC (SCEA) today announced a line-up of digital entertainment apps that will be available on the PlayStation 4 (PS4) computer entertainment system in the U.S. beginning November 15, 2013. Leveraging the power of PSN℠, PS4 gamers will have access to a rich portfolio of services starting on day one, featuring the hottest movies and television shows, unique specialized content, and live sports programming. All entertainment apps can be found in the PlayStation Store.

FireHost Introduces HealthData Repository, a HITRUST Certified Secure Cloud Infrastructure That Meets HIPAA Guidelines


FireHost, the secure cloud hosting company, today announces its new HealthData Repository™, a HITRUST-certified private cloud infrastructure that protects regulated healthcare data. By decoupling electronic health records (EHR) and electronic protected health information (ePHI) from monolithic IT environments, HealthData Repository reduces the number of in-scope systems, shortening the length and cost of compliance audits while strengthening cloud security and performance. FireHost works with more than 150 healthcare-focused organizations, and its cloud infrastructure hosts millions of health data records.

Kurt Hagerman, FireHost’s chief information security officer, said that HealthData Repository offers enhanced security by isolating regulated data from the often monolithic infrastructure and broad administrative permission sets. In addition to providing control over access credentials, the service delivers pure cloud agility and scalability, allowing resources to be provisioned and decommissioned on demand.

“Every healthcare organization has a multitude of records to protect, including data on paper, in email and scattered digital files. HealthData Repository isolates the most sensitive datasets from the general IT environment, while keeping it available via secure remote connections and decisive administrative permissions,” stated Hagerman. “And just as importantly, by keeping regulated data protected in a secure pod, HealthData Repository helps ensure the continuity that’s critical for healthcare environments and patient initiatives.”

Hagerman added that HealthData Repository offers multi-site capabilities across all FireHost data center facilities while maintaining data sovereignty. Because it’s a HITRUST-certified infrastructure, customers enjoy a lighter compliance burden, less procedural documentation and faster audits. With low latency and multiple points of presence for global redundancy, the service offers a high-performing private cloud infrastructure, while FireHost’s Intelligent Security Model™ eliminates the need to waste memory and processor resources on unwanted traffic.


“At Ai Cure Technologies, we are implementing state-of-the-art facial recognition software to confirm medication adherence on mobile devices. This platform is designed for use in high-risk patient populations and clinical trials. The data we gather must be kept secure, compliant with regulatory requirements, and available for real-time analysis,” said Adam Hanina, chairman and CEO of Ai Cure Technologies. “We have been very happy with FireHost’s HealthData Repository as it offers military-grade security and global locations that align well with national and global deployment.”

HealthData Repository is designed to benefit a variety of healthcare IT entities; some of the most successful implementations to date have been for:

    Research Organizations: academic and research institutions can use HealthData Repository to boost performance and service continuity, scale resources on demand and keep sensitive research data protected while lowering compliance costs and hassles.
    Healthcare IT Consultants: HealthData Repository helps IT providers control costs while providing access to security and performance their clients need.
    EHR Solution Providers and SaaS Providers: Instead of investing in managing infrastructure to the latest security and compliance standards, software developers can spend time on higher priorities involving end user engagement and delivering exceptional healthcare solutions.

While these organizations are providing exciting new use cases for the service, hospital systems, public healthcare organizations and non-profit companies in the U.S. and UK are improving their information security posture and easing compliance burdens with the HealthData Repository. The Health Information Trust Alliance (HITRUST) is one such example.

“For health information systems and exchanges to be broadly adopted, security must be at the core of how healthcare-focused organizations work with patient data. FireHost’s HealthData Repository is a strong example of how these organizations can protect their data, making it easier for them to comply with federal regulations such as HIPAA,” said Michael Frederick, vice president of assurance services at the Health Information Trust Alliance (HITRUST). “It makes sense that FireHost created this service offering – they have continually been ahead of the curve when it comes to protecting healthcare data and were one of the first cloud service providers to achieve HITRUST CSF certification. Our faith in FireHost’s HealthData Repository is so strong, we utilize the service to protect our MyCSF tool.”

The company will preview HealthData Repository at the FireHost Booth (#3886) at the HIMSS14 Annual Conference & Exposition in Orlando from February 23 – 27, 2014.

About FireHost

FireHost offers the most secure, managed cloud IaaS available, protecting sensitive data and brand reputations of some of the largest companies in the world. With private, cloud infrastructure built for security, compliance, performance and managed service, responsible businesses choose FireHost to reduce risk and improve the collection, storage and transmission of their most confidential data. FireHost’s secure, managed cloud IaaS is available in Dallas, Phoenix, London, Amsterdam and Singapore, and offers robust, geographically redundant business continuity options across all sites. Based in Dallas, FireHost is the chosen secure private cloud service provider for brands that won’t compromise on the security of their payment card, healthcare, and other regulated data. www.firehost.com

Your next smartphone is now closer to wireless charging


The consolidation of two groups should help the industry standardize — and spread

Computerworld – The agreement announced Tuesday allowing two of the three major wireless-charging consortiums to share their specifications means there’s one less obstacle in the way of a wary mobile device industry looking to adopt the technology.

On its face, the merger between the Alliance for Wireless Power (A4WP) and the Power Matters Alliance (PMA) should allow them to create a consistent wireless charging experience for consumers with enabled devices.

That means consumers would no longer have to worry about whether their smartphone or tablet adheres to the A4WP’s Rezence or the PMA’s Powermat specification when they drop it onto a wireless charging spot in Starbucks, McDonald’s or at an enabled store.

However, the partnership between the two specification groups still leaves a horse race to decide which wireless charging technology will cross the finish line.

Like the Blu-ray Disc and HD DVD standards (and Betamax and VHS before that), one wireless charging specification will eventually win out. Which one does will be decided by device manufacturers, not industry groups.

But, the partnership has brought together A4WP’s magnetic resonance (loosely coupled) charging and PMA’s inductive (tightly coupled) charging. Basically, resonance allows for multiple devices to charge at once on a pad in any configuration. Inductive charging requires an enabled device to be more precisely placed on a pad before it will charge.

Mark Hunsicker, senior director of product management at Qualcomm, demonstrates Rezence wireless charging in an IKEA coffee table.

The A4WP already had both inductive and resonant charging as part of its Rezence specification. But, the tie-up between two rivals also strengthens them against an older and larger challenger – the Wireless Power Consortium (WPC). The WPC owns the widely adopted Qi specification, which is based on inductive wireless charging, but the group also demonstrated the added ability to do resonant charging at this year’s CES show.

The Qi specification, which also includes resonance and inductive technology, is supported by 200 companies, among them a veritable who’s who of electronics, including LG Electronics, Sony, Nokia and Verizon Wireless. “Qi has been dominant and is way ahead of the game,” said William Stofega, a program director at research firm IDC.

There are more than 400 Qi-enabled devices today, including mobile phones such as the Samsung Galaxy S4 and S3, Nokia Lumia 1020, LG G2, Motorola Droid Maxx and Mini, and the Google Nexus 5 phone and Nexus 7 tablet.

“Almost everything [enabled for wireless charging] that shipped last year was compliant with the WPC specification,” said Ryan Sanderson, associate director of Power Supply and Storage Components at research firm IHS.

Even so, the WPC will undoubtedly be concerned about the alliance between the PMA and the A4WP, “just because of their market share,” Sanderson added.

Mobile device makers, such as Samsung and LG Electronics, see wireless charging as a plus, given that consumers would likely prefer a phone with wireless charging over one without.

Ongoing consolidation

Consolidation within the wireless charging industry isn’t new. Last year, Duracell’s Powermat Technologies subsidiary announced a merger with its European counterpart, PowerKiss, in a deal that brought two disparate wireless power specifications together under one umbrella. Both companies fall under the PMA consortium.

PMA was born from Powermat, which claims it has more than 1,500 charging spots in the U.S. In Europe, PowerKiss said it has 1,000 charging spots in airports, hotels and some McDonald’s restaurants.

So for the A4WP, which already had both resonant and inductive wireless charging in its specification, the partnership is more about increasing its customer base as well as adding smart technology.

The PMA’s specification includes an API that monitors the power that’s transmitted, and can manage pre-specified policies, such as how much power any device requires before it’s fully charged.

Daniel Schreiber, president of Powermat and a board member of the PMA, said Powermat’s inductive technology is more efficient than resonant charging, making it preferable for places like a coffee shop that doesn’t want to waste power.

Consumers also may be more nervous about having their mobile devices charge next to a stranger’s, Schreiber said, making inductive charging’s single device limitation more attractive.

“They’re highly complementary implementations, much like WiFi and 4G,” Schreiber said, referring to magnetic induction and resonant charging. “They’re not displacing each other, but complementary to one another.”

Not everyone agrees.

The end game will be resonance-only wireless charging with machine-to-machine data transfer, according to Reinier van der Lee, director of product marketing at Broadcom. “We always felt resonant technology was the way to go, but we also feel the [PMA’s] inductive install base needs to be offered a transition path to resonant charging,” van der Lee said.

An example of Texas Instruments’ wireless charging coil and chip technology. The device can be made much smaller and would be the electrical receiver in a mobile device.

Broadcom, a member of the A4WP, plans to unveil a chipset later this year that will include wireless power management capabilities. Texas Instruments already makes wireless charging chipsets.

John Perzow, the vice president of market development for the WPC, said the rival organizations joined forces after realizing their own products could not address the entire market. But the PMA and A4WP will have to make major tradeoffs to achieve interoperability between their technologies.

“For instance, you can always shoehorn two technologies in one phone, a so-called ‘dual-mode’ approach. But this increases cost and complexity and typically requires tradeoffs like lower efficiency,” Perzow said.

IHS’s Sanderson is hopeful that all three wireless power consortiums can eventually work together on universal standards. Until then, handset, tablet and other electronic device manufacturers will remain wary about choosing one technology over another, fearful they’ll make the wrong bet.

Perzow said the WPC is in talks with the PMA and the A4WP.

“But let’s be clear,” he said. “What PMA and A4WP announced is not one merged group. They both are filling gaps with technology the other didn’t have,” Perzow said.

“When you buy Qi, you know it will work with whatever technology and features evolve down the road,” he continued. “Keeping this compatibility is a top goal, and we’re very interested and eager to cooperate with anyone that shares that goal, including PMA and A4WP.”

Toyota Begins Testing Wireless Charging System

Toyota will begin verification testing of its newly developed wireless charging system in late February, 2014. Toyota developed the system in cooperation with WiTricity, an MIT spin-off that the automaker has been cooperating with for several years.

Toyota Managing Officer Satoshi Ogiso announced in August that the next-generation Prius Plug-in Hybrid would include a wireless charging option.

Apple’s iBeacon turns location sensing inside out


‘Where am I?’ becomes ‘Here I am!’ with Apple iBeacon technology

Apple’s iBeacon location sensing technology, based on the Bluetooth radio in your iPhone, promises to personalize the world around you. For users, this increasingly popular technology changes the question of “Where am I?” into the announcement “Here I am!”

An iBeacon is a Bluetooth Low Energy radio that broadcasts a signal in a given area, say the doorway to a clothing or grocery store. Your iPhone – if it has Bluetooth 4.0, and the radio is turned on, and iOS notifications and location services are active – can detect that signal and query the beacon. The beacon uses radio signal strength to figure out the phone’s location and can share that with iOS. Your phone shows an invitation from the beacon to enable something like “in-store notifications,” which involves sharing your Bluetooth-determined location.
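Concretely, an iBeacon just broadcasts a fixed-format payload inside a standard BLE advertisement, and the receiver turns signal strength into a rough distance. The sketch below is a hypothetical illustration, not Apple’s implementation: it parses the community-documented 25-byte manufacturer field (Apple company ID, type/length bytes, proximity UUID, major, minor, calibrated 1 m TX power) and applies the usual log-distance path-loss estimate:

```python
import struct
import uuid

def parse_ibeacon(mfg_data: bytes):
    """Parse the manufacturer-specific field of an iBeacon BLE
    advertisement: Apple company ID (0x004C, little-endian), iBeacon
    type/length bytes (0x02, 0x15), a 16-byte proximity UUID, big-endian
    major/minor, and the signed calibrated TX power at 1 metre."""
    if len(mfg_data) != 25 or mfg_data[:4] != b"\x4c\x00\x02\x15":
        raise ValueError("not an iBeacon advertisement")
    beacon_uuid = uuid.UUID(bytes=mfg_data[4:20])
    major, minor, tx_power = struct.unpack(">HHb", mfg_data[20:25])
    return beacon_uuid, major, minor, tx_power

def estimate_distance(rssi: int, tx_power: int, n: float = 2.0) -> float:
    """Rough log-distance path-loss estimate in metres; n is the
    environment-dependent path-loss exponent (2.0 = free space)."""
    return 10 ** ((tx_power - rssi) / (10 * n))

# Build a sample frame (UUID, major, minor, and TX power are made up).
frame = (b"\x4c\x00\x02\x15"
         + uuid.UUID("e2c56db5-dffb-48d2-b060-d0f5a71096e0").bytes
         + struct.pack(">HHb", 1, 42, -59))
print(parse_ibeacon(frame))
print(estimate_distance(rssi=-79, tx_power=-59))  # 20 dB below 1 m power
```

The distance estimate is coarse by design — walls, bodies, and pockets all skew RSSI — which is why iOS reports only bucketed proximity (immediate, near, far) rather than a precise position.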

If you accept, your phone downloads an app – over a Wi-Fi or cellular link – and the store can then send you stuff, such as coupons or special offers, or provide services such as buying advice, product ratings, or an updated loyalty card, as you move within range of different beacons throughout the store.

By itself, an iBeacon can’t track you. Its job is to create a kind of electronic tripwire that sets up a connection through your iPhone between you and a backend server of some kind. Only then, can “the system” see where you are, deduce something about your interests, and present information tailored to you and your position.

Apple itself uses iBeacon in its own retail stores [see photo, above]. Customers walk into the store past an iBeacon, receive a notification to enable “in-store notifications” and if they agree, are then digitally greeted with a dashboard for that store’s location. The NFL used iBeacons at its recent “Super Bowl Boulevard” festivities along Broadway in Times Square. Later this year 20 major league baseball stadiums will feature iBeacon deployments.

But the same technology is capable of supporting more advanced services in the future, including mobile payments. Apple, long criticized for not making use of distance-challenged Near Field Communications (NFC) chips in its phones for a mobile wallet application, is now seen by some as using iBeacon as part of a Bluetooth-based alternative to NFC.

Mashable’s Lance Ulanoff gives a clear account of his recent iBeacon experience at Apple’s Grand Central Store in New York City.

Here’s how he describes the purchase he made there, using his iPhone and the EasyPay system: “We started by using the iPhone to scan the product barcode and then we had to enter our Apple ID, pretty much the way we would for any online Apple purchase [using the credit card data on file with one’s Apple account]. The one key difference was that this transaction ended with a digital receipt, one that we could show to a clerk if anyone stopped us on the way out.”

He draws an important conclusion: “With iBeacon and Bluetooth LE, Apple may have created a far more palatable and more passive way of paying digitally, especially since it relies on a payment method iOS customers already know.”

VMware & Google together?


Just last week VMware and Google announced a collaboration to offer hosted versions of Windows apps on Google’s Chromebook platform.

The all-cloud, all-the-time approach to computing has won Google Chromebooks a number of fans in the enterprise, with access to Windows applications generally delivered through some form of desktop virtualization. A new partnership between VMware and Google will make this approach even cheaper, especially for smaller companies.

Today, the companies announced a deal to bring Windows applications to Google Chromebooks from the public cloud, by way of a network of certified VMware vCloud Service Provider Partners (VSPPs) and technology gained through VMware’s acquisition of Desktone. For customers, the benefit is a cheaper way to use Chromebooks as a Windows replacement, without the cost of designing, building, or maintaining a VMware Horizon View DaaS architecture themselves.

“As the countdown to Windows XP end of life continues, deploying Chromebooks and taking advantage of a DaaS environment ensures that security vulnerabilities, application compatibility and migration budgets will be a thing of the past,” writes Google Director of Product Management for Chrome Rajen Sheth in the official announcement.


There’s some obvious synergy between Chromebooks and desktop virtualization — Chromebooks are essentially the thin client design applied to the public cloud concept, and desktop virtualization is a way to centrally manage and deploy applications without concern for what hardware they’re running on.

That said, it’s not clear how many companies will really benefit from this. Desktop virtualization itself is going through a period of rebirth: The rise of software-as-a-service should gradually make it irrelevant, as any enterprise looking for that kind of any-application-anywhere approach can turn to browser-based tools like Salesforce.com or Google Apps itself.

But some legacy Windows apps just can’t be replaced, or more often, there’s not enough of a business case to do so. Hosted DaaS mainly gives smaller companies who wouldn’t be able to afford to run their own virtual infrastructure a way to keep using those old Windows apps on new types of devices. No less than Amazon Web Services gave the market a shot in the arm recently, with the launch of WorkSpaces, a DaaS offering served up from its public cloud infrastructure.

Still, this seems like a corner case at best: Any business that’s forward-thinking enough to deploy Google Chromebooks is probably looking to move as much of its infrastructure to hosted cloud services as possible.

CTEX lights up South American region’s most advanced Cloud Platform


Curacao Technology Exchange (CTEX) has completed the migration and expansion of its advanced Hyper Cloud platform to its new Uptime Institute certified Tier-IV data center. CTEX finished the implementation of its next-generation computing platform over the weekend providing advanced Cloud solutions to customers in the Caribbean and Latin America. Although previously available to a select few customers, CTEX’s Cloud platform is now available to everyone. Various customers from around the region are now in production.

VCE, Cisco and EMC gear forms part of CTEX’s Cloud platform, which offers computing, storage, and application services that can underpin high-traffic websites and mission-critical applications. CTEX launched its Cloud platform a year ago, but controlled distribution until it expanded and migrated the platform to its Tier-IV data center. CTEX’s HyperCloud platform is aimed squarely at business customers with a demand for ‘Always On’ availability. The company offers 100% uptime of all critical data center components, which it achieves through the region’s newest and most advanced Uptime Institute certified Tier-IV data center, located in Curaçao, just off the coast of South America.

CTEX’s manager of Cloud services, Vannessa Varies, notes that the company is “giving people the same services offered by data centers in the United States, Europe or elsewhere in the world. Customers have experienced zero downtime with CTEX’s Cloud platform in the past. The latest migration from a temporary data center and expansion to our Tier-IV data center gives customers even greater security and more flexibility to configure complex computing solutions.”

“To address today’s business challenges, global security and privacy threats, and the advantages offered by the Cloud, companies are now rethinking investments in their own computing platforms and data center facilities. With our unified HyperCloud platform we provide an array of options for reducing infrastructure complexity and operating costs while increasing speed and reliability. Our customers demand extreme security to ensure that their data and applications are protected at a high-end trusted facility. Leveraging our newly enhanced Cloud platform, companies and their IT departments can do more than ever before,” says Anthony de Lima, CTEX’s Chairman & CEO.

CTEX’s Tier-IV data centers are the newest and most expansive Uptime Institute certified Tier-IV facilities in Latin America and the Caribbean. The island of Curaçao offers many benefits, including strict European-based privacy laws, connectivity through five submarine cables (with two more planned for this year), a location outside the traditional hurricane belt and major seismically active zones, a multilingual workforce, and fiscal benefits for international companies locating their computing infrastructure, data and applications at the company’s data center. CTEX is building out four massive 71,000-square-foot facilities; the first entered operation this month.

Amulya InfoTech plans to explore the Indian IT market

After providing remote infrastructure management services and web solutions to clients abroad for over a decade, Coimbatore-headquartered Amulya Infotech India Pvt Ltd is looking to serve the Indian IT industry.

Its founder and Chief Executive Roopa Prabhakar said that Amulya is keen to explore the domestic market.

‘The cloud services market is expected to reach $15 to $18 billion by 2020. Indian enterprises are on the lookout for the benefits of cloud to save on costs and improve data security and accessibility. Our “actsupport” division specialises in extending technical support to web hosting companies and data centres, blade server support services, network and server administration, backup solutions, storage management and virtualisation services,’ she said.

‘The company’s “actmedia” division plans to develop secure mobile applications and extend its security solutions to enterprise websites in the domestic market,’ she said, citing a Springboard Research estimate that predicts a 20 per cent increase in the use of enterprise mobile applications by 2015.

While disclosing her plans for the domestic market, Ms Prabhakar also hinted at Amulya foraying into the US by setting up an office there and establishing a presence in Germany as well.

The company is planning to double its headcount from 150 to 300 very soon. ‘Candidates were reluctant to join our company initially, but the situation has changed in the last couple of years. To keep attrition rates low (it is 6 – 7 per cent), we have increased the package. Today, the staff cost alone works out to 65 per cent of the cost and people are motivated,’ she told Business Line.

State of Cloud Computing Security in the UAE

Security & Compliance-1

This study provides an overview of the cloud security market in the UAE. Besides the cloud security market, it also assesses the cloud market in general. The study looks at the main challenges, drivers and restraints for this market. It estimates market size and growth rate and provides overviews of key market participants. The key strength of the study is the end-user survey conducted to portray the market stance and the demand for cloud security and overall cloud services.

Key Questions This Study Will Answer

  • What is the state of cloud computing and the cloud security market in the UAE?
  • What are the cloud security service end user’s needs and demands in the UAE?
  • What percentage of IT budget was devoted to cloud computing in the UAE? How will it change in the next 3 years? What is the outlook for the cloud market?
  • Which companies are leading cloud service providers in the UAE? What type of companies dominate the UAE market?
  • What are the key drivers and restraints in adopting cloud security services?
  • Which workloads are likely to be deployed in the cloud environment in the short run, and which should be targeted in the long run?


  • The UAE cloud security market is in its infancy, with a growth rate of XX percent in 2012.
  • The market is expected to earn a revenue of $ XX million in 2019.
  • The uptake of cloud computing in the UAE is expected to be the main driver for growth in the cloud security market.
  • The cloud security market is expected to prosper once large enterprises start tapping into the cloud market.
  • A number of cloud security contracts are expected to be signed as a continuation of previous partnerships between cloud vendors, local businesses, and cloud security vendors.

Key Topics Covered:
1. Executive Summary
2. Research Aim and Objectives
3. Research Scope and Definition
4. Market Overview
5. Regulatory Framework
6. External Challenges: Drivers and Restraints
7. Cloud Security Market
8. Cloud Security End Users
9. Cloud Security Vendor Landscape
10. Respondent Profiles
11. Perception of Cloud Computing
12. Budgeting
13. Cloud Adoption Trends
14. Security
15. Threats in the Cloud Environment
16. Cloud Vendor Selection
17. Cloud Vendor Brand Awareness
18. Conclusions
19. Competitor Profiles of Selected Industry Participants
20. The Last Word
21. Appendix

You can find out more here

According to a recent Verizon PCI report, only 9.4% of organizations investigated after a data breach were compliant with PCI Requirement 10


“As we enter the tenth year of PCI DSS, there has been important progress. With version 3.0, PCI DSS is more mature than ever, and covers a broad base of technologies and processes such as encryption, access control, and vulnerability scanning to offer a sound baseline of security. The range of supporting standards, roadmaps, guidance, and methodologies is expanding. And our research suggests that organizations are complying at a higher rate than in previous years.

“After an uncertain start, many organizations now feel comfortable with and better understand what the DSS is about, and accept that complying with it is not only a necessary part of accepting card payments, but also a solid baseline of controls for protecting cardholder data.”

Download Report Here

$655 Million Memory glitch hits Cisco’s Q2 bottom-line

Earnings impacted by faulty DRAM chips; Micron’s name pops up as the supplier

Cisco’s earnings for the second quarter of fiscal 2014 were impacted by a $655 million charge to fix faulty memory components in a range of products. The buggy memory products were manufactured by a single supplier between 2005 and 2010, Cisco said during its Q2 earnings conference call this week.

Micron Technology’s name surfaced this week as the alleged supplier of the faulty DRAM chips.

+MORE ON NETWORK WORLD: Cisco’s Q1 issues carry over into Q2+

The components have been determined to have the potential to fail due to a design or manufacturing defect, Cisco CFO Frank Calderoni said during the call.

These components are widely used across the industry and are included in a number of Cisco’s products. Although the majority of these products are beyond Cisco’s warranty terms and the failure rates are low, Cisco is proactively working with customers on mitigation.

Cisco believes the impact of the memory component fix and its expenses will not be a recurring event. Said Calderoni:

Cisco does not believe it is reflective of ongoing business and operating results.

But CEO John Chambers told the Wall Street Journal that others in the networking industry may also be affected. Micron said other incidents could possibly arise.

By Jim Duffy

Ericsson and Ciena join forces on IP and optical networking

Ericsson and Ciena have signed a global agreement to develop joint transport solutions for converged IP and optical networks that also use SDN (software-defined networking) features.

The agreement is effective immediately, and product integration efforts are under way, the companies said on Friday.

Ericsson and Ciena see the deal as a win-win endeavor: Ericsson will benefit from Ciena’s optical products, based on technologies such as WDM (wavelength-division multiplexing) and used in the backbone of carrier networks, while Ericsson brings its portfolio of IP routers and its Global Services organization to the table.

If the two companies are able to successfully integrate their products, they should be able to better compete with Cisco Systems and its more complete product line-up.

Collaboration will be critical for carriers to build more flexible networks that take advantage of SDN features and dynamically handle changing demands from applications and services with less manual work. The aim is to lower total cost of ownership and allow carriers to roll out new services faster, according to a joint statement.

As part of this agreement, Ericsson will sell Ciena’s Converged Packet Optical portfolio, including the 6500 and 5400 families.

Ericsson and Ciena are both members of OpenDaylight, a project that was started in April last year to develop an open-source framework for SDN and the related Network Functions Virtualization (NFV) concept, which will make it possible for carriers to virtualize their networks. The framework is called Hydrogen, and the first version was released earlier this month.

Anyone can download the Hydrogen code for free, but what real-world impact it will have remains to be seen. At the time of the release, Ericsson said it plans to use OpenDaylight code as part of its larger SDN offering, but didn’t offer any details. The Swedish company has also announced a lab at its San Jose facility for testing OpenDaylight implementations.

By Mikael Ricknäs, IDG News Service

Griffin PowerDock 5 Charging Station


PowerDock 5 is a cool new space-saving charging station for countertop or desktop, providing safe charging and storage for up to 5 devices.

It doesn’t take up much room, about the same as a single iPad. But it gives you a place to store and charge 5 iOS devices at one time, from a single power source. We think it’s the perfect charging solution for small offices or work groups, or a family full of iPhone users.

PowerDock 5 gives each tablet its own charging port and its own frosted backrest, and each charging bay is roomy enough to accommodate your iPhone, iPod or iPad in their cases.

PowerDock 5 is optimized for iOS devices (iPad, iPad mini, iPhone, iPod touch, iPod nano), although our testing indicates that PowerDock’s 5V (2.1 amp) charging circuitry will also charge most Kindle and Android devices. Many smartphones and tablets have unique charging requirements, so PowerDock may not charge your device as fast as you’re used to. Make sure you check your device’s manual for any special requirements.

Introducing Doxie Flip, the inventive new mobile, battery-powered flatbed scanner perfectly designed for photos, sketches, keepsakes, and pocket notebooks.


A new kind of scanner for capturing your creativity and history.

Doxie Flip is a new kind of flatbed scanner that goes everywhere. With an intelligent design that flips, it’s perfectly designed for photos, albums, sketches, pocket notebooks, even small objects like coins and stamps… so you can capture your creativity and history everywhere.

A new way to capture everything

Doxie Flip is lightweight and portable – about the size of a book. It’s cordless, with batteries and SD flash memory so you can take it with you and scan anywhere, no computer required. It’s perfect for research, road trips, or even just your couch.

Flip, see through, and scan everything

Doxie Flip has a removable lid and a transparent scanning window, so you can scan notebooks, old photo albums, even interesting surfaces – just flip Doxie over and place it safely and directly on your originals. Doxie’s window lets you see what you’re scanning as it happens, so every scan is perfect.

Perfect for photos and memories

Doxie scans everything you never thought possible. With a 4×6″ (A6) scanning surface, you can scan most any object, from old photo albums to newspaper clippings, coins, stamps, clippings from books, and everything else. Your originals always stay protected. For big originals, use AutoStitch to merge scans together seamlessly.

Capture notebooks and sketches

Doxie is perfectly designed for capturing pocket notebooks – scan sketches, notes, and artwork with amazing clarity. It’s the perfect companion for Field Notes™ and Moleskine® brand pocket notebooks.

Amazing quality. Beautifully simple. Perfectly sized.

Doxie Flip scans originals up to 4×6″ (A6), delivering brilliant image quality in a size you can carry with you. A removable lid makes scanning anything and everything easy, at up to 600 dpi. And for larger originals, Doxie’s AutoStitch feature seamlessly merges multiple scans together into one big image, so you can scan anything, even large prints.

WeMo Insight Switch – Remotely Turn Electronics On/Off, Monitor them from anywhere via Android or iOS


The Wi-Fi enabled WeMo Insight Switch connects your home appliances and electronic devices to your Wi-Fi network, allowing you to turn devices on or off, program customized notifications, and change device status–from anywhere. WeMo Insight Switch can monitor your electronics and will send information about the device’s energy usage directly to your smartphone or tablet. Perfect to pair with space heaters, wall A/C units, TVs, washers, dryers, fans, lights and more.


Download the free WeMo App to as many smartphones and tablets as you like to create rules and custom schedules. The WeMo App can be as flexible as you need and allows you to create rules that are easy to set up and can easily be changed. You can even receive notifications that are personalized to meet your needs.

Get notified when your laundry cycle is finished so that you avoid wrinkled clothes, know when your child has exceeded their daily TV limit and turn the TV off, or find out if you accidentally left the space heater on.


The WeMo Insight Switch can help keep your home energy bills low by allowing you to set schedules, monitor energy usage on electronics, and find out which devices are used most often. WeMo can notify you instantly if home electronics have been left on and you can choose to either leave them on or turn them off from anywhere. Find out if you left the A/C window unit running and turn it off, find out if the kids are playing video games instead of doing homework, or schedule a space heater to turn on five minutes before you walk through the door.

With WeMo, you can set schedules, create rules for electronics to respond at specific times or to sunset/sunrise, and get notifications for any device connected to a WeMo Insight Switch.


To install WeMo Insight Switch, simply plug the Switch into an outlet in your home and then plug an electronic device or appliance into the switch. Download the free WeMo App from either the Google Play Store or the Apple App store to any smart device. WeMo Insight Switch keeps you connected to your electronics anywhere you are–over Wi-Fi, 3G, or 4G networks.
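Under the hood, WeMo switches expose a UPnP/SOAP control interface on the local Wi-Fi network that the community has documented. It is unofficial, and the port, service path, and example address below are assumptions that can vary by firmware. A minimal sketch of building the SetBinaryState request that flips the relay:

```python
import urllib.request

# SOAP envelope for the (community-documented, unofficial) Belkin
# basicevent service; BinaryState 1 = relay on, 0 = relay off.
SOAP_BODY = """<?xml version="1.0" encoding="utf-8"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:SetBinaryState xmlns:u="urn:Belkin:service:basicevent:1">
      <BinaryState>{state}</BinaryState>
    </u:SetBinaryState>
  </s:Body>
</s:Envelope>"""

def build_wemo_request(host: str, on: bool) -> urllib.request.Request:
    """Build the SOAP POST that toggles a WeMo switch's relay.
    Port 49153 is a common default but is not guaranteed by firmware."""
    body = SOAP_BODY.format(state=1 if on else 0).encode("utf-8")
    return urllib.request.Request(
        url=f"http://{host}:49153/upnp/control/basicevent1",
        data=body,
        headers={
            "Content-Type": 'text/xml; charset="utf-8"',
            "SOAPACTION": '"urn:Belkin:service:basicevent:1#SetBinaryState"',
        },
        method="POST",
    )

# "192.168.1.50" is a placeholder LAN address for illustration only;
# sending would be: urllib.request.urlopen(build_wemo_request(..., on=True))
req = build_wemo_request("192.168.1.50", on=True)
print(req.full_url)
```

The WeMo app and cloud service handle all of this for you; the sketch only illustrates why any device on the same network segment can talk to the switch directly.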


WeMo’s modular design allows you to monitor as much or as little of your home as you like. You can easily add additional WeMo Insight Switches anytime to any room to program multiple devices or appliances. Plus, WeMo Insight Switch has a slim design that won’t block your outlet.


WeMo Insight Switch is compatible with Apple (iOS 6 and higher) and Android (4.0 and higher devices) and your existing Wi-Fi router. It is backed by a one-year limited warranty.


WeMo is a family of simple, ingenious products that make life easier, simpler, better. WeMo uses your Wi-Fi® network and mobile internet to control your home electronics right from your smartphone. WeMo also works with IFTTT, connecting your home electronics to a whole world of online apps.

Federal smartphone kill-switch legislation proposed


The bill would mandate a kill switch and remote-wipe capabilities on cellphones

Pressure on the cellphone industry to introduce technology that could disable stolen smartphones has intensified with the introduction of proposed federal legislation that would mandate such a system.

Senate bill 2032, “The Smartphone Theft Prevention Act,” was introduced in the U.S. Senate Wednesday by Amy Klobuchar, a Minnesota Democrat, and covers smartphones, tablets and any personal electronic device on which commercial mobile data service is provided.

It requires a function that allows the subscriber to remotely remove personal data stored on such devices and to render them inoperable on the networks of any mobile carrier globally. The function must also resist reactivation on another carrier’s network, reprogramming, or resetting unless the subscriber provides a passcode or similar authorization.

The bill specifies that the remote wipe and “kill-switch” function “may only be used by the account holder” and will apply to all such devices manufactured or imported in the U.S. from Jan. 1, 2015. There is an exemption for “low-cost, voice-only” phones that have limited data functionality.

It also specifies that carriers “may not charge the account holder any fee for making the function … available.”

The penalty for breaking the rule, which will be included as an amendment to the Communications Act of 1934, isn’t specified in the bill and will be determined by the FCC.

The co-sponsors are Democrats Barbara Mikulski of Maryland, Richard Blumenthal of Connecticut and Mazie Hirono of Hawaii.

The proposal follows the introduction last Friday of a bill in the California state senate that would mandate a “kill switch” starting in January 2015. The California bill has the potential to usher in kill-switch technology nationwide because carriers might not bother with custom phones just for California, but federal legislation would give it the force of law across the U.S.

Theft of smartphones is becoming an increasing problem in U.S. cities and the crimes often involve physical violence or intimidation with guns or knives.

In San Francisco, two-thirds of street theft involves a smartphone or tablet, and the number is even higher in nearby Oakland. It also represents a majority of street robberies in New York and is rising in Los Angeles.

In some cases, victims have been killed for their phones.

In response to calls last year by law-enforcement officials to do more to combat the crimes, most cellphone carriers have aligned themselves behind the CTIA, the industry’s powerful lobbying group. The CTIA is opposing any legislation that would introduce such technology.

An outlier is Verizon, which says that while it thinks legislation is unnecessary, it is supporting the group behind the California bill.

Some phone makers have been a little more proactive.

Apple in particular has been praised for the introduction of its Activation Lock feature in iOS 7. The function would satisfy the requirements of the proposed California law with one exception: phones would have to ship with the function enabled by default, so consumers have to make a conscious choice to switch it off. Currently, it is disabled by default.

Samsung has also added features to some of its phones that support the Lojack software, but the service requires an ongoing subscription.

By Martyn Williams | IDG News Service

Meet AOSP, the other Android, while you still can



No doubt you’ve seen the stat showing how much more popular Android smartphones are than iPhones in much of the world. But many of those Android smartphones aren’t the Android you’re thinking of — the kind you’d get from Samsung, HTC, or Motorola. That’s because there’s more than one Android. In fact, some analysts believe that about half of the Android devices in the world aren’t ones you’d consider to be Android.

The other Android is called the Android Open Source Project (AOSP), and it’s the truly open source part of Android, used as the basis of smartphones and tablets throughout the world.

You can also think of AOSP as akin to DOS: the embedded core OS in Windows before Windows NT came along. In that analogy, the Windows GUI is Google Mobile Services (GMS), the set of services that runs on top of AOSP to deliver the complete Android experience. GMS is as proprietary as iOS or Windows Phone. Google doesn’t charge money for it, but it comes with requirements that give Google a lot of control over Android devices. It also comprises all those Google services, from the Google Play app store to the Google Maps APIs, that many Android devices rely on to provide their Androidness.

If you live in North America, Europe, Japan, Australia, or South Korea and other rich Asian nations, chances are you’re using Android devices based on both the AOSP core and the GMS services that together represent the Google Android experience. Sure, manufacturers can add their own services and APIs on top of these two, but once you scratch those skins’ surface, you’re back to the Android experience.

In the rest of the world, chances are greater that your Android device is running AOSP. Thus, it doesn’t provide much of the Google experience.

AOSP is used by most of the really cheap Android devices, such as those in China and India, as well as large parts of the rest of Asia, Africa, and Latin America. AOSP is cheaper to use because its services are so basic that it can run on inexpensive hardware better suited for the small incomes of poor countries. It also ironically is a good fit in countries like China where the government doesn’t want a foreign company to have that much reach into citizens’ data and communications — where the government wants to keep that pile of riches for itself.

AOSP is also what Amazon.com’s Kindle Fire tablet OS is based on and why it offers none of the Google services you’d expect from Android. Amazon has to replicate any of the GMS services such as an app store that it wants to offer — and Amazon is one of the few companies that can do that.

In China, Xiaomi is in a similar position to make such technology investments, and that’s why it hired a key Google exec, Hugo Barra, last year. If Microsoft’s soon-to-be-subsidiary Nokia announces an Android device next week as rumored, it may also be an AOSP-based device, not a “real” Android device — Microsoft also has the resources to replicate key GMS features using alternative technologies. (Why it would do so is another question!)

AOSP is a bare-bones OS that Google has been updating less and less, and at some point it may not be a viable Android OS any longer. In fact, AOSP’s chief, Jean-Baptiste Quéru, quit in disgust at Google’s neglect last year. A neglect-inflicted death is certainly what competing open source and Web-based platforms such as Firefox OS, Ubuntu Touch, and Sailfish are counting on.

You can see the progression of Google’s shift from AOSP to GMS in the various Android OS versions. The early Android versions, such as 1.6 “Donut,” were mostly AOSP, whereas the 4.4 “KitKat” version is predominantly GMS. That may explain why the most recent Android version’s adoption by device makers has been slow: it puts them in the position of making me-too Android devices (or at least me-too-er ones) and becoming more and more locked into Google’s proprietary aspects. The increasing proprietariness of Android via GMS also explains why Samsung continues to dabble with the often-promised but as-yet-undelivered Tizen.

All this adds up to a strange brew. Half of the Android devices out there run on AOSP, which seems to be on its way to abandonment. The other half are less able to differentiate themselves from each other.

For those into the mobile horse race, non-AOSP Android smartphones still outsell the iPhone, and the iPad still rules in tablets even if you include AOSP devices. But once the industry recognizes that AOSP isn’t really Android, just as DOS wasn’t really Windows, the mobile horse race will look a lot closer: a near tie between iPhones and “high end” Android phones. In the tablet arena, the iPad will still be the leader, and Android will remain a distant second, followed by the AOSP-based Kindle Fire, whose sales have fallen in the last year and no longer threaten “real” Android tablets from Samsung and others, whose sales are growing. When all is said and done, we’ll experience a psychic change in that horse race.

For those who don’t really care about market horse races, the bigger implication is that much of the developing world is using a platform whose longevity is uncertain, and those regions may be ripe for a big shift in the mobile platforms they use. The developing world has already largely abandoned BlackBerry and Nokia’s Series 40 OSes (and is doing the same to Nokia’s Asha follow-on) in favor of AOSP Android. Will it step up to AOSP+GMS Android or to iOS as it becomes richer? Will it shift to another simple mobile OS? Or will it remain a complex stew of OSes even as the developed world moves further into becoming a Samsung-Apple duopoly?

It’ll be interesting to see how this all goes. But the next time someone talks about “Android,” you’ll know it’s not necessarily what he or she thinks it means. The Android you see may not be the Android you’re looking for — and a big chunk of the Android world may not be around much longer.

By | InfoWorld

Cloud computing timeline illustrates cloud’s past, predicts its future

To understand where you’re going, it always helps to look where you’ve come from. Cloud computing has come a long way since its inception, when Amazon was still only known for selling books and movies, Myspace was considered “cloud,” and experts argued over the true definition of cloud computing — though the latter can still be true. By delving into this cloud computing timeline, IT pros can learn from past ups and downs and have a clearer idea of where the cloud market is heading — though, of course, there are always unforeseen hiccups that can alter its course, such as the PRISM scandal. This cloud retrospective aims to give a high-level view of cloud’s history and question what that means beyond 2014.

[Infographic: cloud computing timeline]


Cloud computing timeline references

The market disagrees in defining cloud: VMware jumps onto cloud bandwagon; Solution providers play advisory role in cloud computing model; Minasi says cloud computing is rife with flaws; Tech Watch: Cloud computing services

By Caitlin White

Can Dell Software rescue its spiraling hardware sales?

As competitive pressures relentlessly squeeze profit margins out of Dell’s core hardware business, the company realizes its PC and server sales won’t guarantee a seat at the IT table for long.

Now, Dell Inc. must become a soup-to-nuts technology supplier with a strong software and services portfolio.

The marketing and sales tail wags the Dell dog. The software team can only do so much.

– Former Dell employee



The transformation, however, has been mired in the mud of a bloody, drawn-out transition to being a private company, difficulties in integrating its software acquisitions, and the struggle to create a cohesive narrative for users. These challenges have all been compounded by resistance from internal factions who want to stay the hardware course, sources close to the company said.

Some analysts believe the long-term survival of the company is at stake if Dell can’t make this transformation.

“Both HP and Dell are hammering hard on [the issue], and Dell for its part made interesting acquisitions of software companies,” said Bob O’Donnell, founder and chief analyst of TECHnalysis Research LLC, Foster City, Calif. “Both those guys are big services companies and [that hasn’t] panned out. The reality is, the writing is on the wall for these guys. It is a challenge for them to survive on hardware only.”

Dell’s metamorphosis leads to layoffs, acquisitions

This transformation started boldly enough with the 2012 hiring of John Swainson, a long-time IBM executive who had most recently salvaged CA Technologies from sinking into software anonymity, as president of Dell’s new software division. The company quickly added a handful of other proven software executives from companies including Symantec.

With Swainson’s high-powered team, coupled with a raft of acquisitions, Dell appeared ready to make the same transformation to a software and services-oriented company that IBM pulled off a couple of decades before.

But there is evidence that Dell’s hardware addiction and transition to a private company are having deleterious effects. The company recently laid off several thousand workers associated with a voluntary severance package.

The layoffs are part of its plan to optimize its business and streamline operations, a company spokesperson said. The company added that it is hiring in strategic areas of the business.

Dell also called reports of up to 15,000 workers being laid off “widely inaccurate,” the spokesperson said, declining to provide an accurate number. The company employs 109,000 workers worldwide.

Similarly, Hewlett-Packard (HP) will lay off 34,000 people by the end of 2014 as part of its restructuring plan. And just last week, reports surfaced that IBM, another major vendor whose server hardware business is in decline, may lay off as many as 15,000 workers sometime in the first quarter.

Meanwhile, Dell has been on a buying spree, making acquisitions to fill in its product gaps related to security, cloud, Software as a Service and systems management. Besides Quest Software, the company has snatched up KACE, SonicWALL, Wyse Technology, Inc., Enstratius, Boomi, Credant and AppAssure, among others.

In an in-depth interview last December with SearchEnterpriseDesktop.com, Swainson said his two priorities are to integrate all the acquisitions and ensure all products are delivered through the right channel. He made it clear that software is the catalyst that will provide Dell users with more value and give Dell higher profit margins.

“Dell’s incredible asset is the SMB distribution channel,” said Brenon Daly, research director of mergers and acquisitions for New York-based 451 Research Group. He added that growth in the software market has slowed, making it difficult for Dell and its competitors.

Dell’s hardware culture hinders transition

Despite the acquisitions of key personnel and software companies, some see the root of Dell’s difficulties as something more fundamental.

“Dell doesn’t understand how to sell enterprise-level software,” said a former Dell employee who requested anonymity. “They brought on the right people who understand software and services. But they quickly found that every time they created opportunities using an integrated software approach, they were forced back to selling individual brands like KACE or Wyse.”

On the other hand, 451’s Daly thinks Dell’s hardware can still pull through the software sales and provide opportunities to the company’s bottom line.

Money talks

According to its fiscal year 2013 10-K filing, Dell spent $5 billion on acquisitions. For the first six months of its last reported fiscal year, 2014, the software group reported revenue of $605 million and an operating loss of $147 million. Dell has attributed the loss to planned research and development and additional sales investments required to take the company forward.

“If you look at the acquisitions that have done well, like KACE, it’s an appliance with an add-on subscription-based software,” he said. “That is valuable because it’s predictable and really sticky. To the extent they can match software and services to a hardware sale is how [Dell] can execute.”

Much of the internal resistance comes from long-time marketing and sales executives fearful of losing core hardware customers. After Swainson and his team presented their vision to the company in late 2012, many of the product groups retreated from the new strategy and reverted to promoting individual brands, one source said.

“The marketing and sales tail wags the Dell dog,” the former employee said. “The software team can only do so much. To make this business work they need the entire company to shift.”

Dell’s internal struggles reflect difficulties many companies encounter as they attempt to change long-ingrained corporate cultures.

“The challenges I’ve heard about relate to working through the massive reorganizations last year, and integrating all the groups together and making that work,” said Steve Brasen, managing research director for Enterprise Management Associates, based in Boulder, Colo. HP and IBM also have similar issues of segmented organizations that need to work together, he added.

“My impression is there has been a natural resistance of change,” Brasen said. “It is a positive move from what they’re doing [but] people are frustrated. We are changing the way IT works and there is going to be pain.”

But Dell’s focus on software and services hardly means the company has forgotten about hardware. The company remains number three in PCs and number two in servers. Just last fall the company rolled out new notebooks and tablets and, most recently, has been showing off a 64-bit ARM-based server.

However, increasing pricing pressure has forced competitors such as IBM to sell its low-end x86 servers to Lenovo, and puts the heat on Dell.

“Lenovo will pressure Dell’s low-cost value proposition, along with its ability to drive x86 server market share — which Dell is largely dependent upon for its transition to an end-to-end solution provider,” said Krista Macomber, analyst, computing and storage practice, Technology Business Research Inc., based in Hampton, N.H.

In addition, Lenovo now has IBM’s converged products such as PureFlex coupled with IBM storage, services and support. That will pressure Dell’s ability to expand in growing cloud and big data-focused workloads, Macomber said.

Dell did not provide comment about its transformation challenges.

Dell’s collaboration works

Other Dell employees differ in their view of the progress the company has made in its transformation journey. They point to Swainson’s mandate of taking a collaborative approach to all acquisitions, which resulted in a cross-functional team spanning all software, services and hardware organizations.

“Before we existed, we’d go to customer briefings with four different definitions of mobility, and technically they didn’t integrate,” said Neal Foster, executive director of the Dell Software group, during an interview at Dell World in December. “There is a virtual team and folks in each business unit are contributing,” he said.

Since this virtual team has started meeting on a biweekly basis, there is a more coordinated approach to the messaging and engineering required to review the products and their roadmaps, Foster said at the time.

For example, the security and management groups have worked together to quickly bring the mobile software tools Dell acquired to market in its new enterprise mobility management suite, said Chris Silva, research director for EMM at Gartner Inc., Stamford, Conn.

Indeed, Dell Software’s cross-functional team approach so far seems to be working for IT pros.

“I put a few ideas on the bulletin board in the user voice and KACE is receptive and looks into those,” said Jon Scott, desktop administrator for Oregon-based Salem Health, in a recent conversation. He uses the latest version of the KACE K2000.

By Diana Hwang and Ed Scannell

Enterprise Spending On Cloud To Triple By 2017


Corporate spending on cloud computing is expected to triple from 2011 to 2017, according to an IHS Technology (IHS) report, which predicts that enterprise spending on cloud computing will reach nearly $235.1 billion, triple the $78.2 billion spent in 2011.

This year, global business spending for infrastructure and services related to the cloud will reach an estimated $174.2 billion, up 20% from the amount spent in 2013, says the study.

“Cloud services, applications, security and data analytics will account for an ever-growing portion of total information technology spending by enterprises, valued at about $2 trillion at present,” IHS analyst Jagdish Rebello says in a statement. He adds that the robust growth will come as an increasing number of large and small enterprises move more of their applications to the cloud, while also looking at data analytics to drive new insights into consumer behavior.

The IHS report notes that among the large tech companies battling for market share in the cloud are Amazon.com, Google and Microsoft. Other companies will also dominate the cloud landscape: those like Dropbox and Carbonite that specialize in cloud-based data storage services, and those that sell online software application services, such as Salesforce.com and Workday.

Other recent research reports have shown similar optimism about cloud adoption and sales. Gartner, for example, has forecast worldwide IT spending of $3.8 trillion in 2014, a slight increase from last year’s $3.7 trillion, with the market set to keep climbing through 2017 owing to increasing spending on devices (ultramobiles, mobile phones and tablets) as well as remote managed services.

“The cloud is no longer an option for businesses; it is a must,” says Tribridge founder and cloud specialist Tony DiBenedetto. Most businesses already work in the cloud, store data there, or deploy applications from it, he says, and as a result the cloud will be the major driver of IT spending and decision making for the foreseeable future.

Red Hat CEO Jim Whitehurst wrote in his blog that 2014 is going to be a defining year for the cloud industry: the year cloud architectures go from experimentation to deployment, big data goes from promise to production, and we get our first glimpse at how these innovations could potentially change our world.

“For the last couple of years, we’ve been talking about the cloud, but realistically, it’s only been about customers starting to toy around with it. You’re finally seeing these things go into production,” says Whitehurst, who expects CIOs to reap the actual benefits of cloud computing in the next couple of years.

6 weird places you’ll find Linux

Linux is everywhere, from your desktops and servers to your phones and televisions. Let’s take a look at some of the stranger places you’ll find Linux installed in this weird world of ours. I should note that these are all very real. You won’t find “Linux installed on a potato” in this list.


This slow cooker

We start our journey into Linux-powered weirdness with this slow cooker from Belkin. That’s right. It’s a crock-pot. “But, Bryan!” you say, “surely it must be more than just a crock-pot.” Nope. It’s just a crock-pot, and it’s powered by Linux. Just like your great grandmother dreamed would one day happen. The future is now.


This creepy-but-awesome robot

How many times have you said to yourself, “I sure could use a robot the size of a pre-teen Hobbit with vacant black eyes that can gaze into my soul and kick soccer balls like a champ”? Well, you are in luck, because the “NimbRo-OP Humanoid TeenSize Open Platform” fills that niche. And it runs Linux.


Underwater tsunami sensors

Weird? Yes, but also fairly remarkable. These underwater sensors, with modem, run Linux. The modems use acoustics to transmit data under the ocean. The practical applications could save lives, and simultaneously provide internet access to both Sealab and seaQuest DSV. That last bit is obviously not true. But the rest is. How amazing is that?


Computer engineer Barbie

She can do so much more than drive a pink jeep and date a plastic, anatomically incorrect man. Barbie, it turns out, is also a software developer. Now, take a look at how Barbie has chosen to decorate her work cubicle. Note that Tux the Penguin (or a close facsimile) takes the place of honor on her shelf. That’s right. Barbie is a Linux nerd.


At 30,000 feet

Have you taken a flight with one of those in-flight entertainment systems? Well, odds are it was running Linux. So if you’re hurtling through the air in an aluminum tube miles and miles above the earth… take solace. The re-run of Big Bang Theory that you’re watching is brought to you by the power of Open Source.