
How business is putting the iPad to work
When Steve Jobs unveiled the iPad in January, he pitched it mostly as a consumer device--a relaxation tool people would use to read books, play games, watch video and peruse family photos. But Michael Kanzleiter and his colleagues at Mercedes-Benz Financial saw something else: a better way to sell cars.

Traditionally, car salesmen trying to close a deal have had to drag customers off the showroom floor and back to the office to fill out loan applications and other paperwork. This summer, though, iPads were distributed to forty Mercedes-Benz dealerships throughout the country, letting dealers complete forms right next to the new cars--and, perhaps, minimizing chances that customers would get cold feet during the final moments of the vehicle-buying process.

"The car is where the emotions come up, and the customer is excited about it," Kanzleiter, senior marketing manager for Mercedes-Benz Financial in Farmington Hills, Michigan said of the pilot program. "The more you can have the process close to the car, the better it is."

Mercedes-Benz isn't alone in bringing the iPad into its business processes: during Apple's third-quarter earnings report, COO Tim Cook said that 50% of Fortune 100 companies are now using the device. But it's also clear that small businesses and entrepreneurs are finding innovative ways of using the Apple tablet to make their businesses and professional lives run more smoothly and profitably.

Examples from the field
Paperless sales C. Lee Smith, president and CEO of Sales Development Services in Westerville, Ohio, sends his sales staff into the field with AdMall, a Web-based diagnostic tool that lets them survey clients about their local advertising needs and helps them create a business plan on the fly.

"The beauty of the iPad is that you skip a step in the process--you don't need paper, you don't need to go back and key it in," he said. "You can just tap and get the correct answers."

Furniture delivery Cleveland-based Arhaus Furniture will put iPads in the hands of its deliverymen in 14 states by November--and expects to save $100,000 a year in time and paper for its invoicing, signature-capture and credit-approval processes. Furniture deliverers will also be able to use the Google Maps app to ensure they're delivering furniture to the correct location.

"We looked at the traditional (electronic invoice) systems that were out there. For big burly guys who deliver furniture, they were kind of tiny. The iPad was just right," said John Roddy, Arhaus' Senior Vice President of Logistics. "They're excited about it, and a little hesitant. When they start looking at less paper carried around, the ability to help the customer in the home, it makes it exciting."

Real-time stats Charlie Wood, a founder of Spanning Sync in Austin, Texas, has his personal iPad propped up next to his office desktop computer. It's open all day to a password-protected Web page that updates constantly with real-time information about sales, server use, and other critical information for the company, which sells apps that sync Google calendars and address books with a user's hard drive information.

"We're sort of addicted to numbers. We're addicted to metrics," Wood said. "What we really wanted was something ambient--instead of looking it up, it would be a thing on the wall. You could glance over and know what the numbers are without looking them up."

All-purpose assistant In the UK, James Burland of Anglebury Press, a small print company on the south coast of England, uses the iPad to do everything from entering job details on Google Docs forms, to taking phone messages, to showing artwork to clients before jobs go to press.

"It's certainly true that we could do everything on a laptop or iPhone, but it wouldn't be quite as elegant," said Burland, who also writes the iPad Creative blog. "The iPad gets carried around quite a bit from office to office and even around the factory."

Technical manuals to go In Warsaw, Poland, broadcast engineer Wojtek Pietrusiewicz uses an iPad instead of carrying a library of technical manuals with him as he services equipment.

"Whenever I do maintenance, software/hardware upgrades or full installations, I require many manuals in PDF form. Since there is rarely space for a laptop in the central apparatus rooms, I decided to use my private iPad instead," he said.

PDF review tool Back in the United States, Wharton School Publishing editor Stephen J. Kobrin says the iPad has proven more useful than the Amazon Kindle DX at displaying PDFs--allowing him to reduce paper clutter--and more versatile than Microsoft's line of tablets.

"It's primarily useful for reading documents and keeping everything in one place," Kobrin said. "It's also very good for keeping to-do lists."

Size matters
One common thread becomes clear when talking to people who use the iPad at work: laptop computers are often too bulky for the jobs described, while iPhones and other mobile devices are too small.

"The display on the iPhone is too small for media salespeople to share information with their advertiser," Sales Development Services' Smith said. "A laptop is too cumbersome, too heavy, and takes too long to boot up."

Back at Mercedes-Benz Financial, the pilot program in dealerships will be reviewed at the end of the summer. If deemed a success, the company could distribute iPads to the remainder of Mercedes-Benz USA's 350 dealerships.

At this point, that seems a likely outcome.

"We've received really good dealer feedback on the program. They're very excited about it, they use it a lot," Mercedes-Benz Financial's Kanzleiter said. "I think it really offers a whole new range of opportunities."
Complacency over VPN security and management unacceptable
Virtual private networks, whether SSL or IPsec VPNs, have been IT's ultimate set-and-forget technology. Once installed and configured, these remote access technologies are often left on auto-pilot, providing secure, isolated access to backend applications, files and other resources.

Mobile devices such as BlackBerries, iPhones and Droid smartphones, however, are changing the remote access dynamic. Companies will soon have to take a second look at remote link-ups over VPNs, especially as users introduce new personal devices to the network and demand access to apps from a variety of operating systems and platforms. In turn, it's up to security operations to provide secure VPN connections for these myriad devices.

As a result, tried-and-true VPN connections and configurations aren't good enough. IT operations, complacent about their VPNs, need to take another look, according to research to be released this week at the Black Hat Briefings by NCP Engineering Inc. Attackers are already leveraging insecure VPN connections to access critical data inside the enterprise. The attacks on TJX, Heartland Payment Systems, and even this year's attacks on Google, Adobe and more than 30 other large technology companies and defense contractors, were carried out remotely with legitimate access over VPN connections.

"The set-and-forget approach is flawed," said Martin Hack, executive vice president, NCP. "If you have VPN clients out there, you have to make sure they are synched to management systems and are getting ongoing policy, configuration and management changes. You need strong proactive remote access management to prevent these issues because otherwise, these connections become the first weapon for an attacker."

NCP's research indicates that complacency over VPNs is one of the primary drivers leading to VPNs becoming an attractive attack vector. Not only configuration issues, but out-of-date software is leading to severe vulnerabilities that could lead an attacker straight into an organization with legitimate access. For example, corrupted route targets in a VPN implementation could redirect traffic to an attack site. In SSL VPNs, organizations often overlook Same Origin Restriction. Failing to turn this on could enable attackers to carry out phishing attacks by routing legitimate traffic away from its intended destination and onto a phishing site.

"Attackers could hijack sessions or install keyloggers without the user's knowledge. They would think they're in a secure environment," Hack said. "If the Same Origin Restriction is turned on and configured properly, only trusted resources can connect to an SSL VPN. If not, anyone can interject themselves onto a website."

Password problems can also arise over improperly managed VPNs, Hack said. Many VPN clients allow users to cache login passwords, sometimes in plain text. Attackers with access to the cache can read them in the registry or in memory. If password caching is allowed at all, administrators must ensure the caches are encrypted.

The NCP research also points out potential denial-of-service scenarios in which malformed packets moving over a VPN connection cause a buffer overflow that brings down a network. Hack said these types of messages can elude an intrusion detection system and cause a system to crash or indefinitely reboot.
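The buffer-overflow risk in that scenario typically comes from trusting a length field inside the packet itself. A minimal, hypothetical parser sketch--not any real VPN's wire format--shows the defensive check: validate the declared payload length against the actual buffer before reading.

```python
import struct

# Hypothetical toy header: 2-byte message type, 2-byte payload length.
HEADER = struct.Struct("!HH")

def parse_packet(data: bytes):
    """Parse a toy packet, rejecting malformed length fields instead of over-reading."""
    if len(data) < HEADER.size:
        raise ValueError("truncated header")
    ptype, plen = HEADER.unpack_from(data)
    if HEADER.size + plen != len(data):
        # A naive parser that trusts plen here is exactly what a
        # malformed packet exploits to crash or corrupt the receiver.
        raise ValueError("declared length does not match buffer")
    return ptype, data[HEADER.size:]
```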

"The requirements for remote access have changed; mobility has changed a lot of that," Hack said. "There's a big need to bring these problems out into the open and get rid of complacency about VPN management. Mobility has changed things and it's a big issue because companies are getting hammered and breached."
Adobe Joins Microsoft's Patch-reporting Program
Adobe and Microsoft are now working together to give security companies a direct line into their bug-fixing efforts.

By year's end, Adobe will start using the Microsoft Active Protections Program (MAPP) to share details on its latest patches, according to Brad Arkin, Adobe's director of product security and privacy. "The MAPP program is the gold standard for how the software vendors should be sharing information about product vulnerabilities prior to shipping security updates," he said.

Adobe initially wanted to reproduce MAPP, but soon realized that it would take a lot of work to build a program similar to Microsoft's, which was piloted two years ago. Arkin's team began discussions with Microsoft, at first in hopes of picking up some tips. "Eventually, together, we came to the conclusion that it would be a lot more fun to work together on this rather than Microsoft helping us to reinvent the wheel," he said.

Typically, whenever a major patch is released, hackers quickly begin to analyze the patch to see what flaws were fixed. They then rush to work out attacks that would exploit the vulnerability on unpatched products.

Adobe has been hit hard in the past two years by hackers who have found bug after bug in the company's products. This often means hard work for security companies, who must scramble to add detection for these attacks.

It's become so bad that one security company, SourceFire, is holding an exclusive Adobe Hater's Ball on Wednesday here at the Black Hat security conference in Las Vegas.

The Ball is really a tongue-in-cheek joke, modelled on comedian Dave Chappelle's Playa Hater's Ball.

"My guys have a love-hate relationship with the guys over at Adobe," said SourceFire Director Matt Watchinski. "Every time a vulnerability comes out of their stuff, we have to jump."

Arkin said he and other Adobe researchers will be at the event.

With Adobe joining the MAPP program, however, security companies like SourceFire should do less scrambling.

MAPP gives them early notice on upcoming patches -- typically about 48 hours -- so they have more time to build attack detection into their security systems. About 65 security companies participate in MAPP. All of them will soon start getting the Adobe data.

This is the first time that Microsoft has extended the MAPP program to cover another company's products, said Dave Forstrom, a director with Microsoft's Trustworthy Computing group.

However, it may not be the last. Forstrom didn't rule out the possibility that other software vendors could also jump on board.
Cities Rent Police, Janitors to Save Cash
Faced with a $118 million budget deficit, the city of San Jose, Calif., recently decided it could no longer afford its own janitors. So the city's budget called for dropping its custodial staff and hiring outside contractors to clean its city hall and airport, saving about $4 million.

To keep all its swimming pools open and staffed, the city is replacing some city workers with contractors.

"These are cases where the question is being asked, 'Is this a core service at the city level?' " said Michelle McGurk, senior policy adviser to the San Jose mayor.

After years of whittling staff and cutting back on services, towns and cities are now outsourcing some of the most basic functions of local government, from policing to trash collection. Services that cities can no longer afford to provide are being contracted to private vendors, counties or even neighboring towns.

The move saves cities budget-crushing costs of employee benefits like health insurance and retirement. Critics say contracting means giving up local control and personalized services.

Cities say they have little choice. Municipalities across the U.S. will face a projected shortfall of $56 billion to $86 billion between 2010 and 2012, according to a report from the National League of Cities.

"You can do across-the-board cuts for only so long," said Andrew Belknap, Western Regional Vice President for Management Partners, a government consulting group. "It's gone from the tactical cost cutting to get through a recession, to in some cases saying we have to exit that business or service altogether."

Maywood, a tiny city southeast of Los Angeles, is taking contracting to the extreme. The city of around 40,000 is letting go of its entire staff and contracting with outsiders to perform all city services. The city is disbanding its police force and handing public safety over to the Los Angeles County Sheriff. Its neighbor, the city of Bell, will take over running Maywood's City Hall.

Like many towns, Maywood is battling a budget deficit. But city officials said they were forced into the situation when the city's insurance carrier decided to cancel coverage because of the $21 million in legal expenses and judgments against the city stemming from the conduct of its police department. Without insurance, the city is barred from hiring employees who work directly for the city.

"We're on the cutting edge here. We're the tip of the spear," said Magdalena Prado, Maywood's community-relations officer, who works for the city as a contractor. Ms. Prado said she has gotten inquiries from cities across the country "wanting to know how this is going to play out. They're facing their own financial strains and looking to us as an example."

Maywood officials insist services will continue. The city has for years used contract workers to run services such as parks and recreation.

But not every transition is smooth, and city employees losing their jobs are seldom eager to help their replacements take them over.

Cities can face expensive lawsuits or severance costs when they lay off employees, although these costs differ in every city, depending on the union contract, the number of people losing their jobs and whether the contractor is willing to hire the former employees.

Maywood city council member Felipe Aguirre said the city negotiated severance packages with civilian employees, but the former police officers have been more difficult. The police union attempted to stop the city from dismantling the department by filing a temporary restraining order. The order wasn't granted, and the police department was disbanded July 1.

Los Angeles County Sheriff's Department officials say they are getting more inquiries from cities and towns that want to pay the department to take over local policing. The department already has policing contracts with 42 of the 88 cities in the county.

Lakewood, a small city near Long Beach, is known nationally for developing a model city structure, known as the Lakewood Plan, that contracts out some major services while maintaining local control over others. The city contracts 40% of its services to outside vendors, including public safety, which is run by the Los Angeles County Sheriff's Department. Other areas it continues to handle itself, including parks and recreation, city-hall administrative services and the water department.

Outsourcing is on the rise around the country. Johns Creek, an Atlanta suburb that incorporated in 2006, contracted all of its city-hall and public-works services with CH2M Hill, a Denver company that provides everything from staff to furniture. The city maintains its own fire and police departments, and employs its own city manager and finance director.

"The county was hard-pressed to provide the services here that people wanted and expected," said Doug Nurse, the city's spokesman, and a CH2M Hill employee. "We had everything in place. We were good to go."

In California, the state's $19 billion budget deficit is putting additional pressure on local governments. The state has begun to reduce the amount of redevelopment funds cities have traditionally received; Pasadena, for instance, had to hand $10.8 million back to the state.

In Long Beach, city officials are considering a plan to help close an $18.5 million budget deficit by hiring a private contractor to manage city marinas.

"We're trying to focus on core services so that non-core services can be eliminated eventually," said David Wodynski, the assistant director of financial management for Long Beach.

A recent Nevada state law requires cities and counties to study consolidating services and provide detailed analysis to lawmakers by September.

The Los Angeles suburbs of Burbank, Glendale and Pasadena are contemplating merging services such as tree-trimming, employee training, purchasing and police helicopters. All three face deficits, and reductions in state funds. The cities have already started a joint emergency-dispatch center that has grown to include other cities.

Glendale has faced an $8 million shortfall on a $170 million annual budget for the last three years, said city manager Jim Starbird, and has already cut police and fire personnel.

"We have to find ways to reduce the costs of services we provide," he said. "We can't just keep cutting services."
Cost Savings: Yet Another Reason to Outsource IT Locally
Maintaining the network. Ensuring remote users have access to resources. Updating virus definitions. Troubleshooting email problems. Any number of IT issues can arise on a daily basis.

So does it make sense for small businesses to hire a full-time IT staffer or outsource their IT needs? That depends. Weighing the pros and cons in both scenarios can help determine which option is likely to best serve small businesses.

In-House IT Support: Pros
Easy access: A tech support person on staff can address issues immediately. Other clients won't be competing for your IT staffer's time, though there may be other departments doing so.

In-House IT Support: Cons
Upfront and hidden costs: Hiring a full-time IT professional is an expensive endeavor. Providing that pro with a computer, desk, telephone extension, payroll account and benefits drives the cost up even higher. For many small businesses, keeping a full-time IT specialist on staff at a full-time salary is simply cost-prohibitive. Not to mention the costs associated with ongoing training for IT personnel.

Limited technological expertise: Your IT specialist may be good with Excel and handy when it comes to figuring out why the printer isn't working, but may not be as savvy when it comes to diagnosing network security issues or upgrading the Exchange server. It's unlikely that one IT professional will be able to provide expertise for all of your technological needs. If having one full-time person is costly, you might not want to calculate the cost for a small team of specialists!

Outsourcing IT Support: Pros
Less expensive: All things considered, outsourcing tends to be less expensive than hiring a full-time IT employee in-house. Many costs -- such as overhead -- are spread over several clients via the agency model. Additionally, your small business doesn't have to worry about costs associated with training or certifying IT staff.

Round-the-clock service: Most professional IT help desk or tech support firms offer their customers 24/7 access to tech support specialists, either by phone or through remote computer access. This means that you'll have someone to walk you through resetting your email password -- even at 2 a.m. What's more, if your main contact is sick, there will be a substitute that you can count on.

Outsourcing IT Support: Cons
Language or cultural differences: Struggling to understand your tech support specialist can make a frustrating situation even worse. Unfortunately, many small businesses choose offshore outsourcing as their least-expensive option, while not considering the time and aggravation spent on communication issues. This can be mitigated either by carefully interviewing various offshore firms and giving them a "test drive," or by hiring a local firm. The latter may also allow you to have the specialist on-site, which is highly recommended for handling most IT support needs.

Not part of the team: Because outsourced IT specialists are there only when scheduled or when you need them to fix a problem, you'll spend time bringing them up to speed when issues do arise or when you want them to provide advice on future technology initiatives. Again, there is a solution: Get an outsourced firm involved in your IT needs on an ongoing basis via "managed services." This way, the firm can help with routine help desk and tech support issues, and will be more fully plugged in to your needs and requirements when it comes time to upgrade the network.

And the Winner Is: Outsource Locally
Certainly, small businesses have a variety of options for solving their tech support issues. For most small businesses, however, outsourcing is the best option. Outsourcing tech support needs allows businesses to stay focused on their own core offerings without getting sidetracked on IT projects. It also allows businesses access to cutting-edge resources and expertise, without the costs typically associated with staying ahead of the technology curve.

For many small businesses, outsourcing to a local firm provides the right combination of cost savings, flexibility and round-the-clock support without the language or cultural issues that sometimes arise with offshore firms. Outsourcing locally also provides small business owners peace of mind that when they need on-site tech support, they can get it, thus allowing them to manage their business, not their network.
5 Highly Avoidable Network Management Bungles
Ask any IT organization to identify the No. 1 cause of network performance problems, and it will probably point to high-profile events: denial-of-service attacks, computer viruses, fiber cuts, power outages or hardware failures. However, studies show that more than two-thirds of network issues are actually tied to a simple everyday activity: the ungoverned process of IT staff making network configuration changes.

Change is an opportunity for mistake. Internal errors -- often inadvertent -- can take a heavy toll on overall network performance. This is a serious IT challenge: IT organizations typically spend anywhere from 60 percent to 85 percent of their time on unplanned work, most of it reactive, time-consuming troubleshooting and last-minute fire drills.

Check out the following five network management pitfalls. Are you a victim? If so, you're wasting precious resources, which could be spent focusing on more strategic activities.

And remember: change happens. Just be prepared.

Mistake 1: Manage Change Reactively
Change management is not the same as managing change. These two processes are different but complementary. Change management is how IT departments develop, request, schedule and implement change to network devices. Once it's implemented, this process is complete.

Managing change is the process of understanding a change's impact on network health and compliance. Even the smallest, most routine update can knock an entire network out of compliance. Instead of waiting for user complaints to come flooding in, this process finds potential issues before performance degrades.

Organizations should automate both processes: change management and managing change. A best practice includes both 1) a change management process that focuses on planning, scheduling and deploying the change, and 2) a managing-change process that ensures the planned change is -- and remains -- a positive modification once it has been implemented.
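The second half of that practice -- verifying a change remains positive -- amounts to re-checking the deployed configuration against compliance rules instead of waiting for complaints. A minimal sketch, with hypothetical rules and a hypothetical dictionary-style config format:

```python
# Hypothetical compliance rules: each pairs a description with a
# predicate over a device's configuration dictionary.
COMPLIANCE_RULES = [
    ("telnet disabled", lambda cfg: not cfg.get("telnet_enabled", False)),
    ("SNMP community is not 'public'", lambda cfg: cfg.get("snmp_community") != "public"),
]

def check_compliance(config: dict) -> list:
    """Return the description of every rule the post-change config violates."""
    return [desc for desc, ok in COMPLIANCE_RULES if not ok(config)]
```

Run against each device after every deployed change, an empty result confirms the change kept the network in compliance; a non-empty one flags the issue before users notice.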

Mistake 2: Too Many Manual Configuration Changes
Each time human hands interact with equipment, there's greater potential for error, even with experienced staff, and fat-fingered users can wreak serious havoc.

The benefit of custom-built scripts or programs is that they reduce the time-consuming, manual effort of collecting and storing configuration and change data. However, when additional scripts are needed over time, this adds to the complexity of the custom-built solution. On top of this is the added worry of staff turnover -- when the original creator leaves the organization, so does their knowledge. Custom scripts, then, can reduce the manual effort, but also grow unwieldy.

Automating the collection, storage, analysis and reporting of network change and configuration data not only reduces time and effort; it also lowers the risk of degraded service by reducing the number of individual, human touches. Automation empowers IT to focus on projects that can improve the overall success of the organization, instead of spending time on manual, repetitive tasks.
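At its core, that kind of automation boils down to snapshotting each device's configuration on a schedule and diffing it against the last stored copy. A sketch of the diff step (how the snapshots are fetched and stored is left out and would vary by vendor):

```python
import difflib

def config_drift(stored: str, current: str) -> list:
    """Return unified-diff lines showing what changed since the last snapshot."""
    return list(difflib.unified_diff(
        stored.splitlines(), current.splitlines(),
        fromfile="stored", tofile="current", lineterm=""))
```

An automated collector would call this per device and alert, or archive the new snapshot, whenever the diff is non-empty -- no manual re-keying and no reliance on the script author's memory.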

Mistake 3: Treat Performance Management and Change Management in Isolation
Almost every organization has tools that give visibility into network performance. The challenge is that these tools live in separate bubbles, frequently managed by different people. This becomes even more problematic once you start overlaying new technologies such as virtualization or cloud computing on top of the network infrastructure. Without a correlated view, IT must play a guessing game.

In more complex scenarios, a change doesn't immediately cause the network to exceed a monitoring threshold, so it doesn't trigger an alert. Hours, days, potentially even weeks later, the suboptimal configuration can combine with other factors, such as new usage patterns, to create unexpected network service degradation. In these cases, troubleshooting is especially tedious -- the root cause is likely buried in a stack of historical reports, and staff must play detective, slogging through every possible cause of performance degradation.

The most successful IT organizations tie network change views with network performance views. Instead of having multiple tools, a single system provides a correlated view, eliminating guesswork.
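Under the hood, a correlated view can be as simple as joining change records and performance alerts on device and time window, so the "stack of historical reports" search becomes one query. A hypothetical sketch:

```python
from datetime import datetime, timedelta

def changes_preceding_alert(changes, alert_device, alert_time, window_hours=72):
    """Return the change records on the alerted device within the lookback window."""
    window = timedelta(hours=window_hours)
    return [c for c in changes
            if c["device"] == alert_device
            and alert_time - window <= c["time"] <= alert_time]
```

When an alert fires, the result is the short list of candidate root causes instead of a guessing game across separate tools.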

Mistake 4: Grant Too Much Administrative Access
Are you too trusting? There's a tendency to provide full administrative rights to any and all IT staff who manage devices. This is risky, especially once the list of "privileged" personnel grows to a substantial size.

IT folks usually make device modifications individually as they see fit, and often with the best of intentions. They think, "This is a small change, it won't impact anything. I'll just make it myself and not wait for the maintenance window." But keeping the IT team in the dark increases potential for an undesirable ripple effect -- one misconfiguration can affect a multitude of devices.

Organizations need to give access based upon individual roles and responsibilities. These should all be documented and managed from a central console. Success here hinges on giving appropriate levels of access and system views to each member of the entire IT team but avoiding overextending.
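In code terms, role-based access comes down to checking each action against a centrally managed role table rather than handing everyone full administrative rights. The roles and actions below are hypothetical:

```python
# Hypothetical role table, documented and managed from a central console.
ROLE_PERMISSIONS = {
    "network_admin": {"view_config", "edit_config", "deploy_change"},
    "helpdesk":      {"view_config"},
}

def is_allowed(role: str, action: str) -> bool:
    """Grant an action only if the user's role explicitly includes it."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

The default-deny behavior -- unknown roles and unlisted actions are refused -- is what keeps the "small change, I'll just make it myself" scenario from bypassing the maintenance window.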

Mistake 5: Ignore the Impact of Change on Neighboring Devices
Ignorance is not bliss. One of the most frequent IT gaffes is taking a narrow, device-centric view when configuring an individual network component. It's crucial to correctly implement and understand how each modification impacts neighboring devices and overall network health.

For example, say there are several different help tickets. The respective issues are fixed, the necessary changes are made, and each modification, reviewed individually, appears fine. However, negative consequences can still be in store: one change can affect neighboring network devices if the new configuration triggers a ripple effect, causing major problems as users, applications or usage patterns vary.

Instead of looking at devices in isolation, IT groups must view the impact of changes holistically, as well as on nearby devices. Note that using only a manual process, it's virtually impossible to determine the domino effect of a change across a complex, multi-device network. Successful organizations build automation into the process to identify potential issues before end users are affected.
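One simple form such automation can take -- a sketch, not any vendor's product -- is modeling the network as an adjacency map and walking outward from the changed device to list the neighbors a change could plausibly affect:

```python
from collections import deque

def impacted_devices(topology: dict, changed: str, max_hops: int = 2) -> set:
    """Breadth-first walk from the changed device; return neighbors within max_hops."""
    seen, queue = {changed}, deque([(changed, 0)])
    while queue:
        node, hops = queue.popleft()
        if hops == max_hops:
            continue
        for neighbor in topology.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, hops + 1))
    return seen - {changed}
```

The returned set tells reviewers which devices to re-validate after the change, instead of checking the modified device in isolation.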

In Conclusion
Change is inevitable, but organizations can take control of it in a way that reduces risk. The key takeaway is to move from a reactive, troubleshooting approach to network change and configuration toward a proactive monitoring one that limits risk by minimizing human configuration errors and greatly reducing the time and effort required to isolate and correct problems.

When it comes to networking, what you don't know definitely can hurt you.
Put Tough Questions to Your Hosting Provider (Before the Going Gets Tough)
By Eric Carsrud E-Commerce Times

Just how secure is your website? How tough is your webhosting provider's backbone? How vulnerable are you to cyberattacks like those that played havoc with Facebook and other sites in the past -- the victims of a rogue blogger?

While no site is absolutely safe from assault, it pays to be vigilant and to determine the strength of your webhosting provider's defenses and its tenacity. In fact, the average total per-incident cost of a data security breach last year was US$6.65 million, according to the Ponemon Institute. An equally important reason to assess your webhosting provider's defenses: It probably also secures your email system -- the lifeblood of most office operations and the most pervasive channel for business communication.

Email Problems Hurt the Most
The realities of modern communication are such that when your website goes offline, the calls and complaints that follow may be fewer than expected, depending on how long the site is down. If your email system goes down, you can expect countless complaints immediately, as well as claims that every second of downtime is costing your company big bucks. "If it breaks, you hear from everyone," is a common refrain among chief information officers about email system outages.

It's nearly impossible to calculate the cost of email downtime, although one computation puts the per-employee cost at about US$20 a year. Still, email must be reliable because of the potentially significant cost that downtime can carry with it. Data show that email systems are brought down more frequently by technological failures and human error than by cyberattacks.

So what are the questions you should ask your webhosting provider or prospective provider to get the best service and reliability? And what answers should you expect? The following advice -- and questions to ask -- should help you:

Mission-Critical Starting Points
Ask your provider what it does to prevent downtime. What does it do to thwart a failure of hardware? Does it have alternate routes to manage traffic? If the provider shares hosting, how simple does it keep its system to improve its ability to recover quickly from a problem? Does it keep spare equipment available within the same location? Are IT support-personnel handy? What monitoring systems are in place to alert you that trouble is approaching or a problem has occurred? The simpler the hosting model and the easier the process it employs, the more likely a provider can respond very quickly when a nasty outage or denial-of-service attack strikes.

Determine what backup systems are available should something go wrong with the servers. What security add-ons are available? Does the provider have physical firewalls, or are they software-based? How securely can you lock up your website, and what high-end products are available to do that? What type of flexibility do you have in making your own security modifications? What kind of security does the webhosting provider offer should you want to conduct business online? Do you have to purchase your own e-commerce SSL certificates, or does the webhosting provider offer them? As for investigating incidents that occur, find out if the prospective provider has a team that investigates security breaches or attempted ones. Does it generate root-cause analysis reports about such incidents?

Tackle Tech Support
Tech support is a key factor to consider. Does the provider have local IT support, or must you contact a call center when a problem develops? Is support available 24/7, is there an 800 number to call, and is service support free? Be sure to determine how you can communicate with tech support -- by phone, email or both? Is texting or chat available via your cellphone? How long, typically, does it take before someone answers the phone or responds to an email? How experienced is the tech support staff; how many years of experience, on average, does each support technician have? And what is its turnover rate? Does the webhosting provider offer customer forums to help you gain more knowledge about its services and about the industry? It may seem like overkill, but when you're in a pinch, you'll want answers immediately.

Ask a lot of questions about the hosting provider's service level agreement, or SLA. It's the contract between you and the provider that specifies, usually in measurable terms, what services the provider will furnish. First, determine whether it even provides customers with an SLA. If it does, what metrics does the SLA specify? Does it include what percentage of the time services will be available? The number of users that can be served simultaneously? Specific performance benchmarks to which actual performance will be compared periodically? The schedule for advance notification of maintenance and network changes such as code upgrades and security patches that may affect users? Help-desk response time for various classes of problems? And does the SLA have a money-back guarantee, defining the percentage of the month that the system must remain up or else you don't have to pay that month's fee?
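An SLA's uptime percentage translates directly into a concrete downtime budget. The sketch below is our own illustration (the function name and rates are not from any particular provider's SLA); it shows how a monthly uptime guarantee converts into minutes of permitted downtime:

```python
def allowed_downtime_minutes(uptime_percent, minutes_per_month=30 * 24 * 60):
    """Maximum downtime per month permitted by a given uptime guarantee."""
    return minutes_per_month * (1 - uptime_percent / 100)

# A "three nines" (99.9%) guarantee still permits roughly 43 minutes of
# downtime in a 30-day month; 99.99% cuts that to about 4 minutes.
for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime -> {allowed_downtime_minutes(pct):.1f} minutes/month down")
```

Numbers like these make it easier to judge whether a money-back clause is meaningful or merely cosmetic.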

Looking Ahead
Bring up the issue of business continuity in terms of whether the webhosting provider can adequately serve you as you grow larger or as you experience peaks in customer demand for your website. Business continuity used to be a major issue to explore, but providers increasingly are able to automatically move data among servers and add more server capability to handle growth in your Web traffic or sudden peaks that occur from time to time. Say you're having an online promotion soon; can the prospective provider put an additional one or two servers online to handle the increased traffic to your site?

Seek to determine how innovative the webhosting provider is and, as the Internet continues to grow dramatically, what it's developing or testing to enhance its clients' protection and security.

In today's tough economy, it also pays to find out how financially secure the provider is. Does it or a parent company have deep pockets to weather economic bad times? You don't want the hassle of suddenly having to find a new webhosting provider because your previous one went under quickly and without warning. Find out if the provider or its parent company owns its databases. The bottom line is that you don't want to be left in the dark if a company goes out of business, and you want access to your data at all times.

As the Internet continues to expand -- and it will, substantially, over the next several years -- expect security management to grow ever more complicated, expensive, and important to you. This is why it's best to ask the tough questions now when quizzing your webhosting provider or a prospective one. It may very well save you headaches, and worse, down the road.

Microsoft issues tool to repel Windows shortcut attacks
Gregg Keizer, Computerworld

Microsoft Corp. late Tuesday released an automated tool to stymie exploits of a critical unpatched Windows vulnerability that experts fear will soon be used by hackers against the general PC population.

However, the tool, like a manual procedure that Microsoft recommended last week, is only a makeshift defense, one that many users may resist applying, since it makes much of the Windows system, including the desktop, taskbar and Start menu, almost unusable.

The company posted a "Fix It" tool on its support site that automatically disables the displaying of all Windows shortcut files. Microsoft stepped users through the same technique last week in its initial security advisory, but at that time it told them that they had to edit the Windows registry. Most Windows users are reluctant to monkey with the registry, since a single error can cripple a computer.

Microsoft's single-click Fix It tool simply automates that process. Users must reboot their machines after applying the work-around, but IT administrators can configure the tool to install it while users are out of the office or not at their PCs.

The company admitted that applying the Fix It or the registry-editing work-around would "impact usability" of the machine, since both transform the usual graphical icons on the desktop and elsewhere into generic white icons, making it impossible to tell at a glance which represents, say, Internet Explorer, and which stands for Microsoft Word.

Microsoft also revised its security advisory, originally published last Friday, to tell corporate administrators that they could defend against attacks by also blocking downloads of shortcut files -- identified by the ".lnk" extension -- and ".pif" files at the network perimeter.

The Windows shortcuts vulnerability was first described more than a month ago by VirusBlokAda, a little-known security firm based in Belarus. But it only began to attract widespread attention after security blogger Brian Krebs reported on it last Thursday. A day later, Microsoft confirmed the bug and admitted that small-scale attacks were already exploiting the flaw.

All versions of Windows contain the vulnerability, including the preview of Windows 7 Service Pack 1 (SP1), and the recently retired-from-support Windows XP SP2 and Windows 2000.

Hackers can craft malicious shortcut files that in turn automatically execute malware whenever a user simply views the contents of a folder containing the malformed shortcut. Initial reports noted that attacks were using infected USB drives to hijack Windows PCs running Siemens software that manages large-scale industrial control systems in major manufacturing and utility companies.

Siemens AG has confirmed that one of its customers, a German manufacturer it declined to name, had been victimized by an attack exploiting the shortcut bug.

Microsoft has promised to patch the problem, but it has yet to name a date. The next regularly scheduled security updates are due to ship in less than three weeks, on Aug. 10.

Researchers are split over Microsoft's expected timetable. But the Tuesday release of the Fix It tool is little help in parsing Microsoft's plans. The company released a similar tool in mid-June for a zero-day vulnerability that went public the day before, but it waited 32 days after that to deliver a patch. In March, however, Microsoft patched a critical Internet Explorer vulnerability just 18 days after issuing a Fix It to block attacks.

Microsoft may face tough patch job with Windows shortcut bug
Gregg Keizer, Computerworld

Microsoft may have a tough time fixing the Windows shortcut vulnerability, a security researcher said today.

A noted vulnerability expert, however, disagreed, and said Microsoft could deliver a patch within two weeks.

"The way Windows' shortcuts are designed is flawed, and I think they will have a very hard time patching this," said Roel Schouwenberg, an antivirus researcher with Moscow-based Kaspersky Lab.

Schouwenberg based his prediction that a patch may prove elusive on the fact that Microsoft has never faced a security issue with shortcuts, and thus has no security processes in place that it can quickly tweak.

For its part, Microsoft considers the flaw a security vulnerability, and has promised a patch. As of Tuesday, however, it had not set a timeline for a fix.

Microsoft has acknowledged that attackers can use a malicious shortcut file, identified by the ".lnk" extension, to automatically execute their malware by getting users to view the contents of a folder containing a malformed shortcut. The risk is even greater if hackers use infected USB flash drives to spread their attack code, since the latter automatically executes on most Windows PCs as soon as the drive is plugged into the machine.

All versions of Windows are vulnerable to attack, including the just-released beta of Windows 7 Service Pack 1 (SP1), as well as the recently retired Windows XP SP2 and Windows 2000.

Attackers have exploited the shortcut bug to gain control of important computers at a customer of Siemens, the German electronics giant. Siemens last week alerted users of its Simatic WinCC management software of attacks targeting large-scale industrial control systems in major manufacturing and utility companies.

Time is also working against Microsoft.

"This may take them awhile to patch," said Schouwenberg. "But the wider-scale use of this is imminent."

Schouwenberg's last comment echoed those of other security experts Monday, when several organizations bumped up their Internet threat indicators in anticipation of impending attacks.

Another problem facing Microsoft is that the code is obviously old, making a quick patch that much less likely. The vulnerability exists in Windows as far back as the Windows 2000 edition, which Schouwenberg has tested and successfully exploited.

Schouwenberg compared the age of the code to that which Microsoft was forced to patch in the WMF (Windows Metafile) image format and Windows' animated cursor (.ani) file formats, in 2006 and 2007, respectively.

In both those cases, Microsoft issued emergency patches -- dubbed "out-of-band" or "out-of-cycle" -- outside its usual monthly schedule.

"I'm quite amazed that [the shortcut] bug hasn't been found before by researchers or by Microsoft," said Schouwenberg. "I would have figured that Microsoft would have caught this. But the fact that it's tied so closely with the OS may have been a problem."

Other researchers disputed Schouwenberg's assertion that a patch would occupy Microsoft for a long time.

"My guess is they will address this out-of-band and within two weeks, based on the exploits in the wild and the press coverage of the Siemens' software hack," said HD Moore, the chief security officer of Rapid7 and the creator of the well-known Metasploit hacking toolkit, in an e-mail reply to questions Tuesday.

An exploit of the shortcut flaw was added to Metasploit Monday, and Moore has been tweaking it since. Today, he said he was able to modify the exploit to create a true drive-by attack, where Windows PCs would be immediately compromised if their users were duped into browsing to a malicious Web site.

"It's always possible that Microsoft will find some very clever idea that will let them patch this quickly," said Schouwenberg.

A Brief History of Encryption
Barry K. Shelton and Chris R. Johnson, TechNewsWorld

Threats to computer and network security increase with each passing day and come from a growing number of sources. No computer or network is immune from attack. A recent concern is the susceptibility of the power grid and other national infrastructure to a systematic, organized attack on the United States from other nations or terrorist organizations.

Encryption, or the ability to store and transmit information in a form that is unreadable to anyone other than intended persons, is a critical element of our defense against these attacks. Indeed, man has spent thousands of years in the quest for strong encryption algorithms.

This article focuses on the Advanced Encryption Standard, or AES, the de facto standard today for symmetric encryption. In order to securely send data using a symmetric cipher, the sender and receiver use the same cryptographic key for both encryption and decryption. Therefore, it is essential to maintain the key in a secure manner to avoid compromise.
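The defining property of a symmetric cipher -- one shared key for both directions -- can be seen even in a toy example. The XOR cipher below is our own illustration, not AES, and offers no real security; it only shows that decryption reuses the exact key used for encryption:

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with the repeating key.

    NOT secure and NOT AES -- it only illustrates that one shared
    key drives both encryption and decryption.
    """
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"shared-secret"
ciphertext = xor_cipher(b"meet at noon", key)
recovered = xor_cipher(ciphertext, key)  # same key, same operation
assert recovered == b"meet at noon"
```

In a real symmetric system like AES, the operations are far more elaborate, but the key-management consequence is the same: anyone holding the key can both encrypt and decrypt.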

Goodbye DES
AES is standardized as Federal Information Processing Standard 197 (FIPS 197) by the National Institute of Standards and Technology (NIST), a non-regulatory federal agency. Prior to AES, the Data Encryption Standard (DES) became the federal standard for block symmetric encryption (FIPS 46) in 1977.

DES was based on an algorithm developed by IBM (NYSE: IBM) and modified by the National Security Agency (NSA). DES was considered unbreakable in the 1970s except by brute-force attack -- that is, trying every possible key (DES uses a 56-bit key, so there are 2^56, or 72,057,594,037,927,936, of them). By the late 1990s, however, it was possible to break DES in a matter of several days. This was possible because of the relatively small block size (64 bits) and key size and advances in computing power according to Moore's Law.
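The key-space figure is easy to verify, and a back-of-the-envelope calculation (the one-billion-keys-per-second rate is an arbitrary assumption of ours, not a benchmark) shows why 56 bits stopped being enough:

```python
des_keys = 2 ** 56
print(des_keys)  # 72057594037927936 -- the figure quoted above

# Assuming a (hypothetical) rate of one billion key trials per second:
seconds = des_keys / 1e9
years = seconds / (365 * 24 * 3600)
print(f"about {years:.1f} years to exhaust the key space on one such machine")
```

Late-1990s cracking hardware tested keys orders of magnitude faster than this hypothetical machine, which is how the worst case shrank to the "matter of several days" described above.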

This achievement signaled the end of DES, although Triple DES -- DES repeated three times with different keys, and therefore essentially a 168-bit key -- is still acceptable for federal use until 2030.

Hello AES
In January 1997, NIST announced a competition for the successor to DES. To allay the suspicions that the NSA had placed "back doors" in DES, the competition was to be open and public, and the encryption algorithm was available for use royalty-free worldwide. The criteria included not only cryptographic strength (resistance to linear and differential cryptanalysis) but also ease of implementation and performance in software and hardware.

Over the course of three competitive rounds and intense cryptanalysis by the world's foremost experts on encryption, NIST selected the winner, the Rijndael (pronounced "Rhine doll") algorithm of Belgian cryptographers Joan Daemen and Vincent Rijmen, in October 2000. FIPS 197 was published on Nov. 26, 2001, and is the symmetric cipher of choice for government and commercial use today. Although originally approved for encryption of only non-classified governmental data, AES was approved for use with Secret and Top Secret classified information of the U.S. government in 2003.

Fundamentals of AES
AES is a symmetric block cipher, operating on fixed-size blocks of data. The goal of AES was not only to select a new cipher algorithm but also to dramatically increase both the block and key size compared with DES. Where DES used 64-bit blocks, AES uses 128-bit blocks. Doubling the block size increases the number of possible blocks by a factor of 2^64, a dramatic advantage over DES.

More importantly, in contrast to the relatively short 56-bit DES key, AES supports 128-, 192-, and 256-bit keys. The length of these keys means that brute-force attacks on AES are infeasible, at least for the foreseeable future. A further advantage of AES is that there are no "weak" or "semi-weak" keys to be avoided (as in DES, which has 16 of them).
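A short arithmetic check (ours, purely for illustration) makes the jump from 56-bit to 128-bit keys concrete:

```python
des_keys = 2 ** 56
aes128_keys = 2 ** 128

# Even the smallest AES key space is 2^72 times larger than DES's --
# roughly 4.7 sextillion DES key spaces per AES-128 key space.
ratio = aes128_keys // des_keys
print(f"AES-128 key space is {ratio:,} times larger than DES's")
```

The 192- and 256-bit key options widen the gap further still, which is why brute force is considered infeasible against AES.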

Details of AES Operation
AES is based on a substitution-permutation network, in which the input data (also called "plaintext") and the cryptographic key are successively processed by substitution boxes (S-boxes) and permutation boxes (P-boxes) in a series of mathematical operations. The 128-bit input block is divided into a 4x4 array of 8 bits (one byte) each, called the "State."

The S-boxes effectively substitute one 8-bit number for another and were designed in such a manner that a change in one bit at the input changes, on average, about half of the output bits, known as the "avalanche property." In contrast, the P-boxes permute, or shuffle, the 8 input bits to produce an 8-bit output. The mathematical operations of the S-boxes and P-boxes are organized in a number of successive "rounds."
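The avalanche property is easy to observe empirically. AES's S-boxes aren't exposed in Python's standard library, so the sketch below substitutes SHA-256 (a hash function designed with the same property) purely as a stand-in, to show how a one-bit input change flips roughly half of the output bits:

```python
import hashlib

def bit_diff(a: bytes, b: bytes) -> int:
    """Count the bit positions at which two equal-length byte strings differ."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

msg = b"attack at dawn"
flipped = bytes([msg[0] ^ 0x01]) + msg[1:]  # flip a single input bit

d1 = hashlib.sha256(msg).digest()
d2 = hashlib.sha256(flipped).digest()
print(bit_diff(d1, d2), "of 256 output bits changed")  # typically near half
```

The same experiment run against a real AES implementation would show a comparable spread across the 128-bit ciphertext block.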

Each round in AES comprises four transformations, or operations. These are called "SubBytes," "ShiftRows," "MixColumns," and "AddRoundKey." All bytes in AES, including the key, are considered to be finite field elements, not numbers, for purposes of the mathematical operations within these transformations. Specifically, the finite field in AES is defined as a Galois field of size 2^8, with 256 elements. FIPS 197 specifies the mathematical details of AES implementation.

Each round takes two inputs: the previous State and a Round Key. The Round Key is derived from the cryptographic key according to the Rijndael key schedule as defined in FIPS 197. There are a variable number of rounds according to the AES key length: 10 rounds for 128-bit keys, 12 rounds for 192-bit keys, and 14 rounds for 256-bit keys. The final round omits the MixColumns transformation; otherwise all of the rounds perform the same four transformations.
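The key-length-to-round-count relationship described above is fixed by FIPS 197 and can be captured in a small lookup (a sketch of ours, not code from the standard):

```python
# Rounds per AES key length, per FIPS 197. Every round applies SubBytes,
# ShiftRows, MixColumns, and AddRoundKey -- except the final round,
# which omits MixColumns.
AES_ROUNDS = {128: 10, 192: 12, 256: 14}

def rounds_for_key_length(bits: int) -> int:
    if bits not in AES_ROUNDS:
        raise ValueError(f"AES does not support {bits}-bit keys")
    return AES_ROUNDS[bits]

print(rounds_for_key_length(256))  # 14
```

Longer keys thus cost a few extra rounds of processing, one reason AES-128 remains popular where its key length is judged sufficient.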

The result of the State after the final AES round is "ciphertext," which bears no resemblance to the plaintext. Decryption of the ciphertext is the inverse of the encryption steps, using the same symmetric key used for encryption.

Success of AES
The open and collaborative process of selecting Rijndael as the AES algorithm was an unprecedented opportunity to ensure that it would withstand many types of sophisticated attacks. In their book The Design of Rijndael, its inventors discuss potential attacks against AES using truncated differentials, saturation attacks (also known as "square attacks"), Gilbert-Minier attacks, interpolation attacks, and related-key attacks.

Although related-key attacks are theoretically the most promising, they are still infeasible from a practical standpoint. To date, most attacks have focused on weaknesses or characteristics in specific implementations, called "side-channel attacks," not on the algorithm itself. Beyond its cryptographic strength, AES is efficiently realized in hardware and software, an important consideration given its widespread adoption. The careful design and intense scrutiny paid to AES has resulted in an unqualified success -- not only for today but decades into the future.

Apple to give iPhone 4 users free cases to remedy antenna woes
By Nancy Blair, Jefferson Graham and Brett Molina, USA Today

Apple CEO Steve Jobs says all iPhone 4 users will receive a free case to help alleviate reception issues with their popular smartphones.

For users still unsatisfied with the fix, Jobs says they have until September 30 to return the phone for a full refund.

"We're not perfect, and phones aren't perfect either, but we want to make all of our users happy," Apple CEO Steve Jobs said.

Friday's announcement follows a recent Consumer Reports blog post in which the magazine said it could not recommend the iPhone 4 because of problems with the antenna, which is located on the bottom left of the device.

The return rate for the iPhone 4 is 1.7%, compared with 6% for its predecessor, the iPhone 3GS, Jobs said. Only 0.55% of iPhone 4 owners have called in about reception issues, he said.

Jobs also said the reception issues aren't unique to iPhone 4, using the BlackBerry Bold 9700 and HTC Droid Eris as examples of other smartphones that experience significant drops in signal strength when held a certain way.

New Rules Lay Out 'Meaningful Use' Requirements for Electronic Health Records
Kimberly Hill, TechNewsWorld

Two companion healthcare IT rules were announced this week. One regulation defines the minimum requirements that providers must meet through their use of certified electronic health records technology in order to qualify for payments under the HITECH Act. The other identifies standards and criteria for the certification of electronic health records technologies.

The U.S. Department of Health and Human Services has released its final version of rules that define the parameters under which physicians and hospitals can qualify for funding to upgrade their electronic medical records systems. Included are new definitions for what will constitute "meaningful use" of electronic records to meet objectives of the programs.

Changes have been made to previous drafts of the meaningful use criteria, which drew thousands of comments from healthcare providers and other interested citizens. The new rules include more flexibility, Joseph Kuchler, spokesperson for the U.S. Department of Health and Human Services, told TechNewsWorld. This was the primary area of concern for those who had objections to earlier versions of the criteria.

Economic Stimulus and Healthcare Reform
Two companion final rules were announced this week. One regulation, issued by the Centers for Medicare & Medicaid Services, defines the minimum requirements that providers must meet through their use of certified electronic health records (EHR) technology in order to qualify for payments under the Health Information Technology for Economic and Clinical Health Act of 2009, better known as the "HITECH Act."

The other rule, issued by the Office of the National Coordinator for Health Information Technology (ONC), identifies standards and criteria for the certification of EHR technologies. The first rule will thus have a greater impact on healthcare providers themselves, while the second will more greatly affect the makers of EHR systems.

Compliance is no small matter to healthcare professionals experiencing an increasing financial crunch in this difficult economy. As much as US$27 billion may be expended in EHR incentive payments over 10 years. Eligible healthcare professionals may receive as much as $44,000 under Medicare and $63,750 under Medicaid, and hospitals may receive millions of dollars for implementation of certified EHRs under both Medicare and Medicaid.

Unintended Consequences
Simpler rules are crucial to achieving the government's stated goals for the program, Bruce Carlson, publisher of Kalorama Information, told TechNewsWorld.
The first set of meaningful use criteria seemed unnecessarily complicated and perhaps a bit rigid, he noted. While the federal government must, of course, take leadership in pushing forward the efficiencies and cost savings promised by EHR systems, overly difficult regulations can have the unintended consequence of providing disincentives rather than encouraging healthcare providers to upgrade their systems.

The final rule for meaningful use divides the objectives EHR use must meet into a "core" group of required measurements and a "menu set" of procedures from which providers may choose any five to defer, said the Department of Health and Human Services.

The incentive payment will be implemented over a multi-year period, phasing in additional requirements that will raise the bar for performance on IT and quality objectives in later years. This will allow physicians and hospitals, along with their IT staffs and consultants, to assess where they stand currently, so they can then tailor their upgrade process to individual practice situations.

It's an important change, said Carlson, because healthcare providers vary widely in their current use of EHR technology.

Less than 40 percent of the overall population of physicians currently use EMR technologies, based on recent surveys, he noted. However, even among those who do, only a very small percentage are using the kind of advanced, integrated EMR processes that the federal government would like to see adopted across the board in the U.S. healthcare system.

About 10 to 15 percent of physicians routinely do the majority of their tasks using electronic records, Carlson stressed. Thus, software companies, consultants and IT professionals have their work cut out for them as they seek to bring widely divergent business processes and levels of automation up to universal federal standards.

Typhoid Adware: Coming From a Laptop Near You
How much do you trust the other computers running on the same public WiFi system you're using? So-called Typhoid adware, a new variety studied by researchers at the University of Calgary, tricks nearby computers into accepting an unknown host computer as a legitimate WiFi connection. The host computer then delivers annoying ads to the phony network of victim laptops.

A yet-unseen malware variant dubbed "Typhoid adware" could allow cyberattackers to prey on portable computer users tethered to unsecured WiFi connections at Internet cafes and other public places.

This potential threat is lurking wherever consumers gather to use free Internet access points. The hidden new threat has none of the telltale symptoms of traditional infections, and it functions as a twist on the notorious "Man-in-the-middle" vulnerability, according to a team of computer science researchers at Canada's University of Calgary.

The researchers named this potential threat after Typhoid Mary, the typhoid fever carrier who spread the disease to dozens of people in the New York area in the early 1900s while showing no symptoms herself.

Adware is software code that users inadvertently allow into their computers when they download infected files like fancy toolbars or free screen savers, or when they visit infected Web sites. Typhoid adware needs a wireless Internet cafe or other area where users share a non-encrypted wireless connection.

"We've not yet seen it in the wild. But it is something we are expecting to see. The reason is so many people bring their computers to centralized wireless locations. The bad guys are interested in making money, so centralized locations are a great opportunity for them," John Aycock, associate professor in the computer science department at the University of Calgary, told TechNewsWorld.

Speculative Origins
His research team devised the concept behind the Typhoid adware attack as part of a proactive computer security study, said Aycock.

"We try to figure out what the bad guys are going to do before we see it in the wild," he said. It is a proof of concept malware that has not yet been found in the wild. But the potential for use is very likely.

Aycock coauthored a paper on the so-called Typhoid adware threat with assistant professor Mea Wang and students Daniel Medeiros Nunes de Castro and Eric Lin. The paper demonstrates how Typhoid adware works and presents solutions for defending against such attacks. In May, de Castro presented it at EICAR, an IT security conference held in Paris.

What It Does
Typhoid adware tricks nearby computers into accepting an unknown host computer as a legitimate WiFi connection. The host computer then delivers annoying ads to the phony network of victim laptops.

Typically, adware authors install their software on as many machines as possible. But Typhoid adware comes from another person's computer and convinces other laptops to communicate with it and not the legitimate access point, Aycock explained.

Then the Typhoid adware automatically inserts advertisements in videos and Web pages on the other computers. Meanwhile, the owner of the infected host computer does not see any of the ads and thus does not know the computer is infected.

Why worry about ads sent from one laptop to another? Ads are annoying, but they can also advertise rogue antivirus software that is harmful to the user's computer. That makes ads the tip of the iceberg, Aycock warned.

Not So Fast
Not all security researchers are convinced that Aycock's fears about an imminent Typhoid adware outbreak are justified.

"About 90 percent of viruses, worms and malware were proof of concept and never made it into the wild," Tracy Hulver, executive vice president for products and marketing at netForensics, told TechNewsWorld.

While not a new concept, the premise behind the Typhoid adware attack gives us a good reason not to use public WiFi connections, noted Catalin Cosoi, lead online researcher for Bit Defender.

"There probably will be some attempt by hackers to use Typhoid. But as it is now, I don't see any big threats from it," Cosoi told TechNewsWorld.

Linux Users Beware
If attackers took advantage of the Typhoid adware's potential, they could blur the line between Linux security and Windows vulnerability. The tools used to develop the proof of concept are part of an open source Linux package called "Dsniff," according to Chet Wisniewski, senior security advisor at Sophos.

"The concept is interesting. If it were developed a bit more it could pose a nasty threat," Wisniewski told TechNewsWorld.

The Dsniff package, written by Dug Song, is a packet sniffer and set of traffic analysis tools. The tool decodes passwords sent in cleartext across a switched or unswitched Ethernet network.

Similar tools are not available for Windows, so the attacker would have to be a Linux user. But Windows boxes nearby would be at risk of receiving ads, said Wisniewski, an avid Linux user.

Tricky Typhoid
The Typhoid adware threat, if it becomes one, presents a different situation for defenders. It also gives attackers a different business model.

"Protecting against Typhoid is a bit tricky because of the way it works. Normally if you have an adware infection you would see a bunch of ads popping up, and you would know something is there. Typhoid adware is different and a lot sneakier," said Aycock.

Instead of showing ads on the computer where it is installed, Typhoid shows ads on computers that are around it by hijacking their Internet connections, he explained. That makes it challenging to convince computer users they have a problem.

If you are seeing the ads, you don't have anything to detect. If you are not seeing any ads, you might find it hard to believe that you have something on your computer.

Typhoid Defenses
Typhoid adware is designed for public places where people bring their laptops, noted Aycock. It is far more covert and displays advertisements on computers that do not have the adware installed, not the ones that do.

"No good defensive solutions have been proposed. Each suggested solution has a down side," warned Cosoi.

Laziness could work against Typhoid -- having to sit near other computer users to spread the infection may limit its appeal, he suggested. Other kinds of attacks deliver far greater results with far less effort.

Proactive Fighting
Aycock and his fellow researchers have devised a few defenses against Typhoid adware. One way is to protect the content of videos to ensure that what users see comes from the original source. Another way is to make laptops recognize that they are at an Internet cafe so they will be more suspicious of contact from other computers.

A proactive approach to security involves having the laptop look for signs of a hijack in a public location, according to Aycock. An analogy is that when you are home you know you are safe. If you go outside you know you have to be more cautious. But computers don't have that same sense.

Another approach is to target computers that might have something like Typhoid on them. That goes back to protections like traditional antivirus software, he noted.

Switchable Options
So far, Aycock's team has succeeded in implementing a type of software switch. It warns a laptop with an active WiFi connection to be less trusting of what other computers on the same Internet connection are telling it.

"This is something that we've been able to do in the lab. This isn't something for regular users at this point," said Aycock.

The defensive switch has to be incorporated into regular antivirus software protections. It could also be integrated into the laptop's firewall software, he said.

Getting a Typhoid defense into play is largely a wait-and-see process. It starts with getting the vendors interested.

"That's one reason we did the paper and presented it at the conference. The audience was a mix of academia and industry. So it is a good venue to advertise this work to those people," said Aycock.
Sony Recalls Half a Million Too-Hot-to-Handle Notebooks
Owners of Sony Vaio notebook computers should check to see whether their devices appear on a recall list the company has issued due to overheating components. No injuries have been reported, but the computers may become hot enough to burn skin. Sony has issued a software patch it says will fix the problem.

Sony (NYSE: SNE) has issued a voluntary global recall of 535,000 Vaio notebook computers after reports of a defect in the devices' temperature control software were brought to its attention. The flaw can cause the notebook to overheat, sometimes to an extreme degree.

The recalled products are from the VPCF11 and VPCCW2 Series of notebook computers; slightly less than half of them are in the U.S.

Sony has also offered a firmware patch for the defect, Stacey Palosky, spokesperson for the U.S. Consumer Product Safety Commission, told TechNewsWorld.

Battery Burn
Sony received 30 reports of units overheating to the point where the keyboards and casing became deformed, she noted, pointing to the report offered by CPSC on the recall. No injuries have been reported, but the defect has the potential to cause the notebook to overheat so much it could cause skin burns, she said.

The models, which were manufactured in the U.S. and China, have been sold at stores including Best Buy (NYSE: BBY), Costco (Nasdaq: COST), Fry's, Amazon.com (Nasdaq: AMZN) and Sony Style retail stores, as well as the SonyStyle.com Web site, between January 2010 and April 2010. They retailed for between US$800 and $1,500.

Consumers can either download the fix or arrange for Sony to pick up the device and have it installed for them.

Other Recalls
Sony is one of many electronics manufacturers that have had to recall products because of overheating or even possible fire safety issues. In 2008, the company recalled 400,000 notebook computers -- again from its Vaio line -- because they were overheating. It had similar problems in 2006 with lithium batteries that were overheating and even exploding. That particular episode affected more than just Sony, however -- other PC vendors that used the defective batteries included Dell (Nasdaq: DELL), HP (NYSE: HPQ) and Apple (Nasdaq: AAPL), leading to their own recalls as well.

The problems are not limited to notebook or laptop computers. Cellphones have experienced similar problems -- perhaps a more worrisome development, as they are more likely to be close to a person's body than a notebook is.

One of the more tragic incidents was a case three years ago in which a man's cellphone reportedly caught on fire when it was in his pocket, causing significant injury. There have also been accounts of fires on airplanes due to lithium batteries.

Supply Chain Problems
Although this is apparently a software problem, and thus presumably more directly under Sony's control, global device manufacturers in general can expect to deal with more recalls as supply chains become more convoluted.

A number of high-profile recalls have occurred this year that are not directly tied to the core product but rather to packaging or to components or ingredients, Mike Rozembajgier, vice president of recalls for ExpertRECALL, told TechNewsWorld. "This further highlights the need for manufacturers to have confidence in their supply chain partners," something that "becomes increasingly difficult as the supply chain becomes more complex," he said.
RIM's new BlackBerry Protect does remote backup

by Jessica Dolcourt, CNET
There have been a few third-party applications that provide some combination of remotely backing up, restoring, and locating an errant BlackBerry smartphone, but no in-house service crafted by BlackBerry maker Research In Motion itself. Not until now, that is.

On Monday, RIM introduced BlackBerry Protect, a free service that provides tools to remotely locate, back up, restore, and wipe the data from your phone, with a system that's extremely similar to Microsoft's freemium My Phone service--introduced for Windows phones back in October 2009.

BlackBerry Protect is based on the BlackBerry ID that RIM revealed in late June (so, we're assuming this means that BlackBerry App World 2.0 should soon show its face). The service consists of a downloadable mobile app, and desktop and Web apps to control and manage the remote commands.

As with Microsoft's My Phone service, BlackBerry Protect includes the following remote commands:

Loud ring: This turns your phone's ringer to the loudest setting, even when you're in silent mode, for those moments you're playing hide 'n' seek with your absent BlackBerry.

Locator: This GPS-assisted feature shows the phone's current location on a map--assuming it's on.

Remote lock: If you hadn't set a password on your device, it's not too late to password-protect it from afar.

Lost and Found: The "Lost and Found" feature appends a message to the start screen, so you can appeal to the Good Samaritan you hope will find and return your phone.

Remote wipe: If all hope is lost, BlackBerry Protect can also wipe data stored on the device and on the microSD card.

Back up data: You can configure BlackBerry Protect to routinely sync your contacts, calendar, browser bookmarks, memos, and tasks to BlackBerry's servers. The service works over Wi-Fi or 3G.

Restore: Restock your data on a new phone. The feature is also useful for transitioning from an old BlackBerry device (that was never lost or stolen) to a new one.

Web interface: The Web interface for BlackBerry Protect has a couple of advantages over the on-phone app: namely, the ability to monitor the status of remote management requests and to change your backup/restore and account settings.

More details: It will support up to five devices associated with one account, which is perfect for a family or small business. Also, for any remote command to work, the device needs to have some sort of data connection, either over the cellular network or over Wi-Fi. If the battery has been pulled, the service won't work. However, BlackBerry Protect will queue up commands for when connectivity returns.
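The queue-up behavior described above is a common pattern in remote device management; here is a minimal sketch of it, with class and method names invented for illustration (RIM's actual server-side implementation is not public):

```python
from collections import deque

class RemoteCommandQueue:
    """Sketch of the queue-and-flush pattern the article describes:
    remote commands (lock, wipe, loud ring) are held while the device
    is offline and delivered in order once connectivity returns."""

    def __init__(self):
        self.pending = deque()   # commands waiting for the device to come back online
        self.delivered = []      # commands the device has received
        self.online = False

    def send(self, command: str):
        """Deliver immediately if online; otherwise queue for later."""
        if self.online:
            self.delivered.append(command)
        else:
            self.pending.append(command)

    def reconnect(self):
        """Device regained a data connection: flush queued commands in order."""
        self.online = True
        while self.pending:
            self.delivered.append(self.pending.popleft())
```

The design choice to queue rather than drop commands matters for something like remote wipe: the request issued while a stolen phone's battery was pulled still takes effect the moment the phone reconnects.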

BES compatibility: While the service is billed as a solution for both consumers and corporate users, on BlackBerrys connected to BES (BlackBerry Enterprise Server) the server's policies will override BlackBerry Protect, so as to safeguard the company's data.

Availability: Starting Monday, BlackBerry Protect will be available as a limited beta, with the open beta and general availability coming "later on this year."

Phone support: BlackBerry Protect will work on the Bold 9000, 9700, and 9650; the Storm 9500 series and Storm 2; and the Curve 8900 and 8500 series, the Tour, and other BlackBerry phones running operating system 4.6 or higher.
How to keep Windows XP SP2 safer after Microsoft stops patching
Maybe you didn't get the memo: Tomorrow marks the end of patches for Windows XP Service Pack 2 (SP2).

And you're still running the nearly-six-year-old edition.

But XP SP2 won't shudder to a stop. Although Tuesday marks the support retirement of the service pack -- a date that some have called a "red alert" for people running SP2 -- that doesn't mean your copy of Windows will suddenly refuse to run.

It does mean that, after tomorrow, Microsoft will not offer any security patches, no matter how severe the vulnerability, no matter what part of Windows or associated component is involved. No more Windows patches -- and no more patches for Internet Explorer (IE), no patches for Windows Media Player, no patches for Outlook Express.

You can, of course, sidestep the whole problem by upgrading to Windows XP SP3, which will be supported until April 2014; Microsoft has posted a page that explains how to do that. (Note: Because there is no SP3 for the 64-bit version of Windows XP, you'll continue to receive security updates if you're running SP2 of that edition.)

Among your options: Download and install SP3 via Windows Update, download a disk image for upgrading multiple machines or order an SP3 CD for $3.99.
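The support rules above reduce to a simple decision: 32-bit XP needs SP3 to keep receiving patches, while 64-bit XP stays patched on SP2 because no SP3 exists for it. A hypothetical helper (the function name is ours) encoding that logic:

```python
# Hypothetical helper encoding Microsoft's stated support rules after the
# July 2010 cutoff: 32-bit XP must be on SP3 (patched until April 2014),
# while 64-bit XP has no SP3, so its SP2 remains the supported level.

def still_patched(service_pack: int, is_64bit: bool) -> bool:
    """Return True if this XP install still receives security updates."""
    if is_64bit:
        return service_pack >= 2  # x64 SP2 is the last (and supported) level
    return service_pack >= 3      # 32-bit machines must be on SP3
```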

In fact, you actually have four weeks to upgrade to SP3 before Microsoft releases the next likely XP patch on Aug. 10. There's little chance that Microsoft will issue an "out-of-band" emergency update before then.

But if you're committed to SP2, for whatever reason, and have no intention of upgrading anytime soon, there are steps you can take to make your PC more secure and your time on the Internet safer.

Dump Internet Explorer. After Tuesday, Microsoft won't be providing IE patches of any kind, for any version -- IE6, IE7 or even 2009's IE8 -- to people running Windows XP SP2.

But other browser makers aren't halting updates for their wares. Mozilla, Google, Apple and Opera will be shipping fixes for Windows XP versions of their Firefox, Chrome, Safari and Opera browsers for the foreseeable future.

More than a year ago, Mozilla debated whether to drop support for older editions of Windows, including Windows 2000 and Windows XP SP2. But the company decided against the move.

According to the system requirements for Firefox 4 Beta 1, the preview Mozilla released last week, the browser runs not only on Windows XP, but also Windows 2000. (Mozilla's systems requirement link for Firefox 4 currently takes you to the page for version 3.6.6, leading us to believe that the requirements will remain the same for Firefox 4, which is slated to ship in November 2010.)

And because Mozilla's policy is to keep supporting a browser with security updates for at least six months after its successor launches, moving to Firefox 4 down the road buys you time: if the company ships Firefox 5, or whatever the next edition is called, a year later -- in November 2011 -- patches for Firefox 4 would continue through May 2012 or later.

It's important to keep a browser up-to-date on patches because hackers continue to exploit browser vulnerabilities, particularly those in IE. They focus on IE bugs for a simple reason: Every Windows machine has it, and Microsoft's browser continues to be used by more people than any other.

Ironically, you may actually improve the security of your Windows XP SP2 machine if you dump IE.

Patch third-party programs, especially browser plug-ins. According to most vulnerability experts, it's not your operating system that today's attackers target: It's non-Microsoft software, particularly browser plug-ins.

Antivirus vendors McAfee and Symantec have both reported huge surges in attacks exploiting bugs in Adobe's Reader, one of the most widely-installed plug-ins. McAfee, for example, said that exploits of Reader jumped 65% in the first quarter of 2010 compared to 2009's total.

Those kinds of numbers mean you should be spending more time patching third-party products and less time worrying about the inevitable vulnerabilities in Windows XP SP2 that Microsoft will no longer fix.

But that's tough: Most non-Microsoft software lacks automatic updating. Adobe, for instance, only instituted auto-updating for its regularly-exploited Reader and Acrobat in April -- and requires users to manually switch it on -- but it still hasn't offered the same functionality for its just-as-often-attacked Flash Player plug-in.

Stay safer. Without patches for the operating system, it's even more important than ever to practice safe computing.

Install antivirus software or a multi-component security suite if you don't have one on the PC already. If you do, keep it up to date by regularly downloading new signatures. Several AV programs, including Microsoft's own Security Essentials, are free.

Also, keep the firewall turned on -- easily done since Windows XP SP2 was the first Microsoft OS that not only included a firewall, but enabled it by default.

And remember the wisest advice: Don't steer to sites you're not sure can be trusted, don't open e-mails and attachments you didn't expect to receive, and don't download software from questionable sources.

We know, we know ... it's the same advice you've heard a hundred times.

Keep reading Microsoft's security bulletins. Just because your copy of Windows XP SP2 won't receive any more updates doesn't mean you should stop looking at the bulletins Microsoft publishes each Patch Tuesday.

Those bulletins may not strictly apply to XP SP2, but Microsoft often includes steps users can take to protect themselves if they're not able to deploy a patch. In the bulletins, that information is tucked under the subhead "Workarounds" beneath the information for each vulnerability.

The workarounds may include steps you can take with XP SP2 to deflect or hinder attacks. Obviously, your mileage may vary.

Microsoft's irregular security advisories -- generally issued as a prelude to an eventual patch -- also contain worthwhile information, including which Windows versions are affected, how attacks (if there are any at that point) are exploiting the bug and whether there are workarounds that can block or help block assaults.

Install Tuesday's patch. One of the four security updates slated for Tuesday applies to Windows XP SP2 -- the one that addresses the vulnerability a Google-employed security researcher revealed last month. You should, of course, grab it.
Putting The iPhone 4 to the Test
Though we've kept you informed about the ongoing debate with the iPhone 4's antenna, we haven't said what that really means for users. You may see your bars drop when you hold the handset a certain way, but how does that affect the phone's performance?

In data speed and call quality tests, we've seen significant changes when we cover the antenna gap on the handset's lower left side. Indeed, in one call quality test, the audio cut out completely when we covered the trouble spot. But to give those findings some context, we need to compare the iPhone 4 with other AT&T devices.

In our experience from the hundreds of cell phones we've reviewed, attenuation on other handsets hasn't been as significant. But to be fair, we're going to test a few models again to see how Apple's device differs. Maybe it's worse, but maybe it's about the same.

In addition to call quality, we'll also be looking at the number of signal bars that the other handsets display. Yes, we know the number of bars displayed isn't the most accurate meter--and you'll need to jailbreak a phone to get the Field Test feature back--but users rely on that information. And after Apple issues its software update to fix how bars are displayed on the screen, we'll run those comparison tests again.
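For what it's worth, the bars-versus-signal relationship Apple's update will tweak is essentially a threshold mapping from measured signal strength to a displayed count. The sketch below is purely illustrative -- the dBm cutoffs are invented, not Apple's or AT&T's actual values:

```python
# Illustrative only: a made-up mapping from measured signal strength (in
# dBm, where values closer to 0 are stronger) to the bars a handset might
# display. Real handsets use vendor-specific thresholds; these are invented.

THRESHOLDS = [-51, -77, -91, -101, -113]  # cutoffs for 5, 4, 3, 2, 1 bars

def bars_from_dbm(dbm: int) -> int:
    """Return the number of bars to display for a given signal reading."""
    for bars, cutoff in zip(range(5, 0, -1), THRESHOLDS):
        if dbm >= cutoff:
            return bars
    return 0  # no usable signal
```

Changing how bars are displayed, as Apple plans to, amounts to shifting these cutoffs -- which is exactly why bar counts alone are a poor meter for comparing handsets.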

Our goal is to tell you not only what is going on, but also how it affects you. We also want to find out whether the iPhone is alone in experiencing such attenuation problems.