Saturday, March 6, 2010

[Gd] Google @ ICST 2010


Google Testing Blog: Google @ ICST 2010

I'll be presenting a paper at ICST 2010 in Paris, April 6-10, about how Google tests and builds software. Here's a pointer to the program if you are interested, and here's a link to the abstract of the talk itself. I'll publish the paper here after the talk. Hopefully, I'll see some of you there!

Posted by Patrick Copeland

Friday, March 5, 2010

[Gd] Windows Beta Update


Google Chrome Releases: Windows Beta Update

The Google Chrome Beta channel for Windows has been updated. This release fixes a few UI and stability issues.

More details about additional changes are available in the svn log of all revisions.

--Orit Mazor, Google Chrome Team

[Gd] How Google does disaster recovery


Google Apps Developer Blog: How Google does disaster recovery

Will you be ready when disaster strikes? It's an uncomfortable question for many IT administrators, because answering it with confidence usually requires boatloads of money, immense complexity, and crossed fingers. Fortunately there's a better way.

Taking email as an example, consider a few of the ways that companies protect their data from disruption. Ideally a typical small business backs up its email. They have a mail server, and copy the data to tape at regular daily or weekly intervals. If something goes wrong, they go to the tapes to restore the data that was saved before their last backup. But the information created after their most recent backup is lost forever.

In larger businesses, companies will add a storage area network (SAN), which is a consolidated place for all storage. SANs are expensive, and even then, you're out of luck if your data center goes down. So the largest enterprises will build an entirely new data center somewhere else, with another set of identical mail servers, another SAN and more people to staff them.

But if, heaven forbid, disaster strikes both your data centers, you're toast (check out this customer's experience with a fire). So big companies will often build the second data center far away, in a different 'threat zone', which creates even more management headaches. Next they need to ensure the primary SAN talks to the backup SAN, so they have to implement robust bandwidth to handle terabytes of data flying back and forth without crippling their network. There are other backup options as well, but the story's the same: as redundancy increases, cost and complexity multiplies.

Google Apps customers don't need to worry about any of this for the data they create and store within Google Apps. They get best-in-class disaster recovery for free, no matter their size. Indeed, it's one of the many reasons why the City of Los Angeles decided to go Google.

How do you know if your disaster recovery solution is as strong as you need it to be? It's usually measured in two ways: RPO (Recovery Point Objective) and RTO (Recovery Time Objective). RPO is how much data you're willing to lose when things go wrong, and RTO is how long you're willing to go without service after a disaster.

For a large enterprise running SANs, the RTO and RPO targets are an hour or less: the more you pay, the lower the numbers. That can mean a large company spending the big bucks is willing to lose all the email sent to them for up to an hour after the system goes down, and go without access to email for an hour as well. Enterprises without SANs may be literally trucking tapes back and forth between data centers, so as you can imagine their RPOs and RTOs can stretch into days. As for small businesses, often they just have to start over.

For Google Apps customers, our RPO design target is zero, and our RTO design target is instant failover. We do this through live or synchronous replication: every action you take in Gmail is simultaneously replicated in two data centers at once, so that if one data center fails, we nearly instantly transfer your data over to the other one that's also been reflecting your actions.

Our goal is not to lose any data when it's transferred from one data center to another, and to transfer your data so quickly that you don't even know a data center experienced an interruption. Of course, no backup solution from us or anyone else is absolutely perfect, but we've invested a lot of effort to help make ours second to none.
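
To make "RPO of zero" concrete, here is a toy sketch of synchronous replication in plain Java. The class and method names are hypothetical, and in-memory maps stand in for the two data centers; a real system replicates over the network and must handle partial failures:

```java
import java.util.HashMap;
import java.util.Map;

/** Toy model of synchronous replication: a write is only acknowledged
 *  after BOTH replicas have stored it, so no acknowledged data can be lost. */
public class SyncReplicator {
    private final Map<String, String> primary = new HashMap<>();
    private final Map<String, String> secondary = new HashMap<>();

    /** Returns true only once the write is durable in both data centers. */
    public boolean write(String key, String value) {
        primary.put(key, value);      // commit in data center A
        secondary.put(key, value);    // commit in data center B before acking
        return primary.containsKey(key) && secondary.containsKey(key);
    }

    /** Failover read: if the primary is lost, the secondary already has the data. */
    public String readAfterPrimaryFailure(String key) {
        return secondary.get(key);
    }
}
```

Because the write only returns once both copies exist, losing either data center after the acknowledgment loses no acknowledged data; that is what a recovery point objective of zero means.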

And it's not just to preserve your Gmail accounts. You get the same level of data replication for all the other major applications in the Apps suite: Google Calendar, Google Docs, and Google Sites.

Some companies have adopted synchronous replication as well, but it is even more expensive than everything else we've mentioned. To back up 25GB of data with synchronous replication, a business may easily pay from $150 to $500+ in storage and maintenance costs, and that's per employee. That doesn't even include the cost of the applications. The exact price depends on a number of factors, such as the number of times the data is replicated and the choice of service provider.

At the low end a company might tier the number of times they replicate data, and at the high end they'll make several copies of the data for everyone. We also replicate all the data multiple times, and the 25GB per employee for Gmail is backed up for free. Plus you get even more disk space for storage-intensive applications like Google Docs, Google Sites and Google Video for business. Other companies may offer cloud computing solutions as well, but don't assume they back up your data in more than one data center.

Here are a few of the reasons why we're able to offer you this level of service. First, we operate many large data centers simultaneously for millions of users, which helps reduce cost while increasing resiliency and redundancy. Second, we're not wasting money and resources by having a data center stand by unused until something goes wrong – we can balance loads between data centers as needed.

Finally, we have very high speed connections between data centers, so that we can transfer data very quickly from one set of servers to another. This lets us replicate large amounts of data simultaneously.

One of the most compelling advantages of cloud computing is its power to democratize technology. Whether it's a 25GB email inbox, Video for business, synchronous replication, or one of countless other advanced services, Google Apps gives companies of all sizes access to technology that until recently was available to only the largest enterprises. And it's available at a dramatically lower cost than the on-premises alternatives, without the usual hassles of upgrading, patching and maintaining the software.

No one likes preparing for worst-case scenarios. When you use Google Apps, you have one less critical thing to worry about.

This was originally posted to the Official Google Enterprise Blog and is reposted here with permission.

Written by Rajen Sheth, Senior Product Manager, Google Apps


[Gd] Speech Input API for Android


Android Developers Blog: Speech Input API for Android

People love their mobile phones because they can stay in touch wherever they are. That means not just talking, but e-mailing, texting, microblogging, and so on. So, in addition to search by voice and voice shortcuts like "Navigate to", we included a voice-enabled keyboard in Android 2.1, which makes it even easier to stay connected. Now you can dictate your message instead of typing it. Just tap the new microphone button on the keyboard, and you can speak just about anywhere you would normally type.

We believe speech can fundamentally change the mobile experience. We would like to invite every Android application developer to consider integrating speech input capabilities via the Android SDK. One of my favorite apps in the Market that integrates speech input is Handcent SMS, because you can dictate a reply to any SMS with a quick tap on the SMS popup window.

Speech input integrated into Handcent SMS

The Android SDK makes it easy to integrate speech input directly into your own application—just copy and paste from this sample application to get started. Android is an open platform, so your application can potentially make use of any speech recognition service on the device that's registered to receive a RecognizerIntent. Google's Voice Search application, which is pre-installed on many Android devices, responds to a RecognizerIntent by displaying the "Speak now" dialog and streaming audio to Google's servers—the same servers used when a user taps the microphone button on the search widget or the voice-enabled keyboard. (You can check if Voice Search is installed in Settings ➝ Applications ➝ Manage applications.)

One important tip: for speech input to be as accurate as possible, it's helpful to have an idea of what words are likely to be spoken. While a message like "Mom, I'm writing you this message with my voice!" might be appropriate for an email or SMS message, you're probably more likely to say something like "weather in Mountain View" if you're using Google Search. You can make sure your users have the best experience possible by requesting the appropriate language model: "free_form" for dictation, or "web_search" for shorter, search-like phrases. We developed the "free form" model to improve dictation accuracy for the voice keyboard on the Nexus One, while the "web search" model is used when users want to search by voice.
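
As a minimal sketch of the flow described above (assuming the standard Android SDK classes; the activity and request-code names are illustrative), launching recognition with the dictation model looks roughly like this:

```java
import android.app.Activity;
import android.content.Intent;
import android.speech.RecognizerIntent;
import java.util.ArrayList;

public class DictationActivity extends Activity {
    private static final int VOICE_REQUEST_CODE = 1234;

    /** Fires the RecognizerIntent; Voice Search (if installed) shows "Speak now". */
    private void startDictation() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        // "free_form" for dictation; use LANGUAGE_MODEL_WEB_SEARCH for short queries.
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speak your message");
        startActivityForResult(intent, VOICE_REQUEST_CODE);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == VOICE_REQUEST_CODE && resultCode == RESULT_OK) {
            // The recognizer returns a list of candidate transcriptions.
            ArrayList<String> matches =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            // Use matches.get(0) as the dictated text.
        }
        super.onActivityResult(requestCode, resultCode, data);
    }
}
```

Swapping the language model extra to LANGUAGE_MODEL_WEB_SEARCH is all it takes to target shorter, search-like phrases instead.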

Google's servers currently support English, Mandarin Chinese, and Japanese. The web search model is available in all three languages, while free-form has primarily been optimized for English. As we work hard to support more models in more languages, and to improve the accuracy of the speech recognition technology we use in our products, Android developers who integrate speech capabilities directly into their applications can reap the benefits as well.


Thursday, March 4, 2010

[Gd] YouTube + You


YouTube API Blog: YouTube + You

YouTube is an extremely team-oriented, creative workplace where every single employee has a voice in the choices we make and the features we implement. We work together in small teams to design, develop, and roll out key features and products in very short time frames. That means something you write today could be seen by millions of viewers tomorrow.

Despite being the world's largest online video site and part of Google, we still have a relatively small engineering group. We are looking to add a few key impact players to our team. Come see what it's like to work at YouTube!

Lisa Pisacane, YouTube Recruiting

[Gd] Still Stuck in the 90s


Google Testing Blog: Still Stuck in the 90s

By James A. Whittaker

Flashback. It's 1990. Chances are you do not own a cell phone. And if you do, it weighs more than a full-sized laptop does now. You certainly have no iPod. The music in your car comes from the one or two local radio stations that play songs you can tolerate, and a glove box full of CDs and cassettes. Yes, I said cassettes. You know, the ones next to those paper road maps. Music on the go? We carried our boom boxes on our shoulders back then.

If you are a news junkie, you get your fix from the newspaper or you wait until 6 ... or 11. Sports? Same. Oh and I hope you don't like soccer or hockey because you can't watch that stuff in this country more often than every four years. Go find a phone book if you want to call someone and complain.

I could go on, and on, and on, but you get the point. Oh wait, one more: how many of you had an email address in 1990? Be honest. And the people reading this blog are among the most likely to answer that affirmatively.

The world is different. The last 20 years have changed the human condition in ways that no other 20-year period can match. Imagine taking a 16-year-old from 1990 and transplanting him or her to a 2010 high school. Culture shock indeed. Imagine transporting a soccer mom, a politician, a university professor... Pick just about any profession and the contrast would be so stark that those 1990 skills would be a debilitating liability.

Except one: that of a software tester. A circa 1990 tester would come from a mainframe/terminal world. Or if they were on the real cutting edge, a locally networked PC. They'd fit into the data center/slim client world with nary a hiccup. They'd know all about testing techniques because input partitioning, boundary cases, load and stress, etc., are still what we do today. Scripting? Yep, they'd be good there too. Syntax may have changed a bit, but that wouldn't take our time traveler long to pick up. That GEICO caveman may look funny at the disco, but he has the goods to get the job done.

Don't get me wrong, software testing has been full of innovation. We've minted patents and PhD theses. We built tools and automated the crud out of certain types of interfaces. But those interfaces change, and that automation, we find to our distress, is rarely reusable. How much real innovation have we had in this discipline that has actually stood the test of time? I argue that we've thrown most of it away. A disposable two decades. It was too tied to the application, the domain, the technology. With each project we start out basically anew, reinventing the testing wheel over and over. Each year's innovation looks much the same as the year before. 1990 quickly turns into 2010 and we remain stuck in the same old rut.

The challenge for the next twenty years will be to make a 2010 tester feel like a complete noob when transported to 2030. Indeed, I think this may be accomplished in far less than 20 years if we all work together. Imagine, for example, testing infrastructure built into the platform. Not enough for you? Imagine writing a single simple script that exercises your app, the browser and the OS at the same time and using the same language. Not enough for you? Imagine building an app and having it automatically download all applicable test suites and execute them on itself. Anyway, what are you working on?

Interested? Progress reports will be given at the following locations this year:

Swiss Testing Day, Zurich, March 17 2010

STAR East, Orlando, May 2010

GTAC, TBD, Fall 2010

Here's to an interesting future.


[Gd] The forums, they are a-changin'


iGoogle Developer Blog: The forums, they are a-changin'

iGoogle developers, your lives are about to get a bit easier. For the last few years, the iGoogle Developer Forum has been the place for gadget developers to discuss development of gadgets for iGoogle. Despite the name, the forum only provided help and answers for one of the two iGoogle APIs. For themes questions, developers turned to the Google Themes API group, fragmenting the development community in two.

Starting immediately, the iGoogle Developer Forum will host all iGoogle developer discussion, for both gadgets and themes. The Themes API group will be put into read-only mode in a few days, and its welcome text will include a reminder for everyone to visit the combined group.

In addition, we have created a new shared issue tracker for reporting issues with the Gadgets and Themes APIs. The igoogle-legacy tracker is to be used exclusively for issues pertaining to the deprecated legacy gadgets API, and will remain active until the API is no longer supported. All gadgets.* API, Themes API, and directory issues should be posted in the new issue tracker.

If you have any questions about these changes, please let us know in the forum.

Posted by Rob Russell, Developer Relations

[Gd] Dev Channel Update


Google Chrome Releases: Dev Channel Update

The Dev channel has been updated to 5.0.342.1 for the Windows, Mac, and Linux platforms, and to 5.0.342.0 for Chrome Frame.


  • Extension content scripts no longer run multiple times after fragment (i.e. #hash) navigations (Issue: 35924)
  • Early version of Geolocation API now available with following caveats:
    1. To enable, run the browser with --enable-geolocation
    2. Wi-Fi based location is only supported on Windows and Mac (not OS X 10.6 for now)
    3. Permissions are not persisted (will re-prompt every time) and associated UI is incomplete.

  • Improved plugin stability (Issue:35081,36928)
  • AutoFill Preferences UI Updates
  • Translate infobars are now implemented (Issue:34466)
  • Mac History menu now has favicons and no longer lists duplicates of Recently Closed sites
    (Issues: 20464 and 21314)
  • Added HTML5 databases to the Mac cookie manager (Issue: 35191)

Chrome Frame
Known Issues
  • Linux
    • Chrome crashes when setting prompt for cookies/data (Issue: 37426)
    • The "Show details" link on cookie/data prompt doesn't work (Issue: 37428)
More details about additional changes are available in the svn log of all revisions.

You can find out about getting on the Dev channel here:

If you find new issues, please let us know by filing a bug at

Karen Grunberg
Google Chrome

[Gd] Registration for Google I/O 2010 is now closed


Google Code Blog: Registration for Google I/O 2010 is now closed

This year's conference is now sold out, which means we'll be seeing over 4,000 of you on May 19-20 at Moscone West! For those of you who can't join us in person, video recordings of all sessions and keynotes will be available on YouTube following the conference.

Continue to follow us on Twitter for updates on sessions, speakers and the Sandbox. We'll also continue posting updates and Google I/O-relevant content on this blog.

By Joyce Sohn, Google Developer Team

[Gd] Google Chrome Developer Update: 3000 Extensions, Events on 4 Continents and More


Chromium Blog: Google Chrome Developer Update: 3000 Extensions, Events on 4 Continents and More

This is the first issue of Google Chrome Update for Web Developers. In these regular updates, we'll inform you about new features enabled in Google Chrome and announce Google Chrome-related developer events. We will also share samples and highlight cool extensions and HTML5 apps written by the developer community.

What's New in Google Chrome?

The Google Chrome Beta channel for Mac and Linux has been updated to 5.0.307.7 and the extensions gallery now offers more than 3000 extensions for users to choose from.

For Google Chrome extensions, we've just released a couple of new experimental APIs, including a history API. Since these are experimental APIs, the extensions gallery won't allow you to upload extensions that use them. However, we'd like to encourage you to read the documentation, give them a try, and send us your feedback.

Last but not least, the new Google Chrome stable release has many new HTML and JavaScript APIs including WebSockets, Notifications, and Web SQL Database. We are interested to hear how you've been using these APIs. Please share with us the cool applications you are building.

Samples and Tutorials

We've been working on creating samples to help you implement certain functionality in your extensions. You may be interested in viewing the source code for extensions that:
  • Merge all of the open tabs into a single window.
  • Use OAuth to connect to web services.
  • Make cross-domain XMLHttpRequests from a content script.
  • Display page actions based on the current URL or the current page's content.
Upcoming Events

We are pleased to announce that we will host a series of Google Chrome developer events over the next month in the following cities (dates in local time):

  • Sydney, AU - Mar 5th
  • Tokyo, Japan - Mar 11th
  • Austin, TX - Mar 14th - Mar 15th
  • London, UK - Mar 16th
    • Meetup, HTML5 and Google Chrome extensions (sign up here)
  • Madrid, ES - Mar 18th
    • Google Chrome hackathon @Universidad Complutense de Madrid (sign up here)
We also plan to hold events in Germany and will be announcing more information about those soon, so stay tuned!

Let Us Know What You Think

This is just the first of many Google Chrome developer updates. Let us know what you would like to see in future updates, and follow us on Twitter at @ChromiumDev.

Posted by Vivian Li, Developer Advocate

Wednesday, March 3, 2010

[Gd] SCVNGR and QR codes in location-based mobile gaming


Google Code Blog: SCVNGR and QR codes in location-based mobile gaming

This post is part of Who's @ Google I/O, a series of blog posts that gives a closer look at developers who'll be speaking or demoing at Google I/O. This guest post is written by Seth Priebatsch, Chief Ninja of SCVNGR, who's creating a mobile game for the conference.

SCVNGR is a platform for quickly and easily building location-based mobile games. Each game is all about doing challenges at places. Go here and take a photo, go there and solve this riddle. You happen to be at this coffee shop? Awesome! Try this challenge and earn a couple points! SCVNGR powers games for all sorts of institutions, ranging from Princeton to Harvard to the Smithsonian Institution to SIGGRAPH and even the U.S. Navy.

If you're attending Google I/O this year, you'll get to try out our mobile game at the conference! (Don't forget to bring your Android phone, if you've got one!) I'm not going to give it all away here, but I do want to talk about one of the especially cool features that we're rolling out using some neat Google APIs.

One of the biggest challenges that our game-builders face is how to build location-based challenges that truly verify the user has actually made it to the right location. There are some non-technical solutions to this problem, such as creating riddles that require the user to be there to solve them (i.e. what is the third word on the fourth line of the plaque on the back wall) or taking photos which are then verified manually by the community or the game developer themselves.

We've also looked at a number of more technical solutions. The most obvious is to take the geo-tagged coordinates of each of the locations within a game and then use GPS to ensure that the player is within a certain radius of the location. Unfortunately, GPS verification has issues when the locations are indoors (as many are) and can vary greatly in accuracy across different devices.

A new option, one that we're launching in a couple of weeks, uses QR codes planted at locations within the game-board. Players must scan these QR codes to verify that they've made it to the right spot. We're using the Google Charts API as an easy way to programmatically generate QR codes. Of course, generating and planting the QR codes is only half of the equation. You've also got to be able to decode them from the phones during the game. We experimented with a couple of options as to how to best achieve this.
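
As a rough illustration of the request involved, a Charts API QR code boils down to building a chart URL. This helper is hypothetical and the parameter values are illustrative; cht, chs, and chl are the chart type, image size, and payload parameters:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class QrCodeUrl {
    /** Builds a Google Charts API URL that renders the given text as a QR code. */
    public static String forText(String text, int sizePx) {
        return "http://chart.apis.google.com/chart"
                + "?cht=qr"                                              // chart type: QR code
                + "&chs=" + sizePx + "x" + sizePx                        // image size in pixels
                + "&chl=" + URLEncoder.encode(text, StandardCharsets.UTF_8); // payload to encode
    }
}
```

Fetching the resulting URL returns a PNG of the QR code, ready to print and plant at a location.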

Our first thought was to simply have the players snap pictures of the QR code which we'd then post back to our server, decode and respond with whether or not that was in fact the right QR code (or a QR code at all). The benefit of this solution was only having to utilize one QR code processing library for both our iPhone and Android applications. But we ran into a couple issues right up front:

  1. The time cost incurred by having to transmit a reasonably high-resolution image to our servers.
  2. Most players aren't very good at taking pictures of QR codes. They move the lens of the camera very close and snap the picture. (It's actually best to take the picture from 12-24 inches away, to let the camera focus sharply on the QR code.) This led to a high number of failed submissions before we were actually able to recognize a valid QR code.

Add all this up and it created a pretty poor game-play experience.

So we turned to the ZXing project (pronounced "zebra crossing"), which is an open source barcode processing library written in Java and highly suited to the Android environment. Running the ZXing code right on the device rid us of the time delay introduced by transmitting images to the server. But we still had the issue of the high failure rate of users snapping unsuitable images. Rather than trying to implement any form of "auto-scanning", we've chosen to simply grab images from the camera every 1/8 of a second or so, scan them, and stop the process once a QR code is recognized.
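
The polling approach described above can be sketched independently of the camera API. The FrameSource and Decoder types here are hypothetical stand-ins for the camera preview and the ZXing reader:

```java
/** Polls camera frames roughly 8 times a second until one decodes as a QR code. */
public class QrScanner {
    public interface FrameSource { byte[] nextFrame(); }
    public interface Decoder { String decode(byte[] frame); } // null if no QR found

    public static String scan(FrameSource camera, Decoder decoder, int maxAttempts) {
        for (int i = 0; i < maxAttempts; i++) {
            String result = decoder.decode(camera.nextFrame());
            if (result != null) {
                return result;        // stop as soon as a QR code is recognized
            }
            try {
                Thread.sleep(125);    // ~1/8 of a second between grabs
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return null;
            }
        }
        return null;                  // gave up without a successful decode
    }
}
```

Grabbing frames continuously spares the player from having to frame and snap a good photo themselves.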

As for the iPhone, ZXing has an Objective-C port, but in order to grab images in real time from the camera, you'll have to use a private API call on iPhone OS 3.1. Luckily, Apple has officially authorized its usage until the API is made public.

We're hard at work integrating QR codes into the great game that we're building for Google I/O and we hope you'll get a chance to play. The I/O team will let you know when the game is ready! The SCVNGR team will also be there in person, so please come by and say hello! We'd love to get your thoughts on the game or just chat about some of the Google APIs that we've used within SCVNGR.

By Seth Priebatsch, Chief Ninja of SCVNGR

[Gd] How to use Google Apps APIs to handle large files


Google Apps Developer Blog: How to use Google Apps APIs to handle large files

Editor's Note: This post is the second in a series written by Michael Cohn and Steve Ziegler from Cloud Sherpas, a software development and professional services company. We invited Cloud Sherpas to share their experiences building an application on top of Google Apps utilizing some of our APIs. Cloud Sherpas will also be participating in the Developer Sandbox at Google I/O this May where they'll be demoing their use of Google technologies and answering questions from attendees.

SherpaTools Directory Manager allows administrators to easily manage User Profiles and Shared Contacts in the Google Apps directory. SherpaTools is built on Google App Engine (GAE) utilizing the Google Web Toolkit (GWT), and makes heavy use of the Google Apps Management and Application APIs to provide new administrator and end-user features.

Last week we wrote about how SherpaTools Directory Manager uses Two Legged OAuth for authentication, and GAE Task Queue API and the Google Datastore APIs to make it easy to divide large retrievals over long intervals into smaller chunks. This week we will discuss how we used the User Profile API to retrieve the data sets and the Document List Data API to populate a Google Spreadsheet.

The features in SherpaTools use a number of additional Google Apps APIs.  For example, contact data may be imported from or exported to Google Docs Spreadsheets.  In the case of export, contacts are retrieved via the User Profiles or Shared Contacts API and written to memcache.  Next, a new Spreadsheet is created via the Documents List Data API and the CSV byte stream is added as the media type. Below is a technical explanation of how we retrieved the User Profiles and exported them to a Google spreadsheet.

Retrieving and Storing User Profile Entries

The User Profile API makes it easy to divide retrieval of all profile entries into multiple fetch operations.  The one area to tweak for this call is the size of results that can reliably be returned within 10 seconds on Google App Engine.  In our testing, 100 entries per page is about the right size.  Once configured, we will retrieve the feed containing the first set of entries, parse the response, and persist the results to cache.  If that feed contains a link to another page containing more entries, we queue up a task to handle the next page and repeat until all pages are processed:
    ContactQuery query = null;
    //if this is the first page, there is no next link; construct the initial page url
    if (nextLink == null) {
        String domain = getDomain(loggedInEmailAddress);
        String initialLink = PROFILES_FEED + domain + PROJECTION_FULL;
        query = new ContactQuery(new URL(initialLink));
    } else {
        query = new ContactQuery(new URL(nextLink));
    }
    query.setStringCustomParameter(TWO_LEGGED_OAUTH_PARAM, loggedInEmailAddress);
    //fetch the next profile feed containing entries
    ProfileFeed feed = contactsService.query(query, ProfileFeed.class);
    List<Map<String, String>> currentMaps =
            (List<Map<String, String>>) memcacheService.get(memcacheKey);
    for (ProfileEntry entry : feed.getEntries()) {
        //secret sauce: convert the entry into a csv column header/value map
        //and add it to currentMaps
    }
    //store the updated list of converted entry maps back into memcache
    memcacheService.put(memcacheKey, currentMaps);
    if (feed.getNextLink() != null) {
        //start a task to get the next page of entries
        tasksService.fetchUserProfilesPageTask(spreadsheetTitle, loggedInEmailAddress,
                feed.getNextLink().getHref(), memcacheKey);
    } else {
        //no more pages to retrieve; start a task to publish the csv
    }

Exporting Profiles as a Google Docs Spreadsheet

One of the trickiest obstacles to work around in this effort is generating the Spreadsheet file since GAE restricts the ability to write to the File System.  The Spreadsheets Data API was one possibility we considered, but we ended up feeling it was a bit of overkill, having to first create the Spreadsheet using the Docs List API and then populate the Spreadsheet one record per request.  This could have generated thousands of requests to populate the entire Spreadsheet.  Instead, we leveraged an open source library to write csv file data directly to a byte array, and then sent the file data as the content of the newly created Docs List entry in a single request: 
public void saveRowsAsSpreadsheet(String spreadsheetTitle, String loggedInEmailAddress, String memcacheKey) throws Exception {
    //get the list of csv column header/value maps from cache
    List<Map<String, String>> rows = (List<Map<String, String>>) memcacheService.get(memcacheKey);
    //secret sauce: convert the csv maps into a byte array
    byte[] csvBytes = converter.getCsvBytes(rows);
    SpreadsheetEntry newDocument = new SpreadsheetEntry();
    MediaByteArraySource fileSource = new MediaByteArraySource(csvBytes, "text/csv");
    MediaContent content = new MediaContent();
    content.setMediaSource(fileSource);
    content.setMimeType(new ContentType("text/csv"));
    //add the MIME-typed byte array as content of the SpreadsheetEntry
    newDocument.setContent(content);
    //set the title of the SpreadsheetEntry
    newDocument.setTitle(new PlainTextConstruct(spreadsheetTitle));
    URL feedUri = new URL(new StringBuilder(DOCS_URL).append('?').append(TWO_LEGGED_OAUTH_PARAM)
            .append('=').append(loggedInEmailAddress).toString());
    docsService.insert(feedUri, newDocument);
    //we are done; delete the data stored in cache
    memcacheService.delete(memcacheKey);
}

Once completed, this method results in a newly created, populated Google Docs Spreadsheet viewable in the logged-in user's Google Docs.

The one difference between these code examples and the function we've ended up with in production is that, for customers with over 1,000 Google Apps users, we have broken up Spreadsheet creation into worksheets of 1,000 rows per sheet instead of creating one Spreadsheet upon completion of retrieval of all of the User Profiles.  This was to avoid running into the default maximum object size ceiling in Memcache, the maximum Google Docs List API upload data size, and the Spreadsheet maximum cell number.  Tacking on this logic was not particularly complicated but ends up being quite a bit harder to follow as an example, so we left it out of this discussion.  

Though not explicitly covered in this post, despite best efforts, these remote calls are subject to all the instability associated with making calls across the web. We continue to experience occasional timeouts and other infrequent network errors. Fortunately, if tasks are designed to fail correctly, the Task Queue API automatically retries each task until it succeeds. Since the API is designed this way, we need to make sure that any unrecoverable errors are caught to avoid endless retries of "bad tasks."
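
That "fail correctly" pattern can be sketched in isolation. The Task and exception types here are hypothetical stand-ins; on App Engine the equivalent signal is the response status, which tells the queue whether to retry:

```java
public class TaskRunner {
    public interface Task { void run() throws Exception; }

    /** Marks errors that will never succeed on retry (e.g. permanently bad input). */
    public static class UnrecoverableException extends Exception {
        public UnrecoverableException(String msg) { super(msg); }
    }

    /** Returns true if the task is finished (it succeeded, or it is a "bad task"
     *  that was deliberately dropped); false if it should be re-queued and retried. */
    public static boolean execute(Task task) {
        try {
            task.run();
            return true;                  // success: task is done
        } catch (UnrecoverableException e) {
            // swallow "bad tasks" so the queue does not retry them forever
            return true;
        } catch (Exception e) {
            return false;                 // transient error: signal a retry
        }
    }
}
```

The key distinction is between transient failures, which are worth retrying, and unrecoverable ones, which must be caught and dropped.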


This post demonstrates an approach to export a large set of Google Apps contact information into a Google Docs Spreadsheet.  Many similar long-running, divisible operations can use this same approach to spread work across a string of tasks queued in the GAE Task Queue.  Also, the same approach to writing Google Docs files could be used to publish a variety of types of reports from GAE.  We would love to hear what you think of this approach and if you have come up with your own solution for similar issues.

Thanks to the Cloud Sherpas team for authoring this post. Check out SherpaTools at

[Gd] Android at the Game Developer's Conference

| More

Android Developers Blog: Android at the Game Developer's Conference

Tuesday, March 9 marks the start of the 2010 Game Developers Conference in San Francisco, and Android will be there! There has been a lot of interest in Android from the game development community, and our presence at GDC is intended to provide developers with everything they need to get started with the platform. We are hosting several technical sessions and participating in two industry panels.

We also want to meet you and answer your questions about Android game development, so we've set aside time for "office hours." Android team engineers will be on-hand to answer your questions, and if you have a game in development for Android, we'd love to see a demo.

Below, you can see the technical sessions we're hosting and the industry panels we're participating in. We look forward to seeing you at GDC 2010!

Technical sessions

Tuesday, March 9

Bootstrapping Games on Android
Chris Pruett
Everything you need to know about games on Android in 60 minutes.
1:45 PM - 2:45 PM
Room 309, South Hall

Wednesday, March 10

Bring Your Games to Android
Jack Palevich
An in-depth look at writing and porting C++ games using the NDK and a thin Java shell.
10:30 AM - 11:30 AM
Room 302, South Hall

Get the Most out of Android Media APIs
Dave Sparks & Jason Sams
Tips and tricks for optimizing your sound, video, and graphics for compatibility, efficiency, and battery life.
11:45 AM - 12:45 PM
Room 302, South Hall

Android Office Hours
The Android team
Come meet the team, ask us your questions, and show off your games!
3:00 PM - 4:00 PM
Room 302, South Hall

Industry panels

Wednesday, March 10

GamesBeat2010: A sea of mobile devices
Eric Chu
Industry experts weigh in on the future of mobile game development.
4:30 PM - 5:30 PM
Moscone Convention Center

Thursday, March 11

After the iPhone...what?
Dave Sparks
Audio experts discuss the nitty gritty technical details of alternative gaming platforms.
10:30 AM - 11:30 AM
Room 112, North Hall


[Gd] Google PowerMeter API introduced for device manufacturers

| More

Google Code Blog: Google PowerMeter API introduced for device manufacturers

Today we're excited to introduce the Google PowerMeter API for developers interested in integrating with Google PowerMeter. This API will allow device manufacturers to build home energy monitoring devices that work with Google PowerMeter. We're launching this API to help build an ecosystem of innovative developers working to make energy information more widely available to consumers.

With today's launch of the API, we highlight the core design principles for integrating with Google PowerMeter. In particular, we outline the underlying data model and the accompanying protocols that ensure Google PowerMeter gives consumers access to their energy consumption data while taking the utmost care to maintain the user's privacy and control over access to that information. We also show, with code samples and client implementations, how to easily start building your PowerMeter-compatible device.

Tune in to our blog and subscribe to our notification list for announcements about upcoming developments. We are thrilled to bring together a rich framework to help more developers integrate with Google PowerMeter through our open, standards-based API, and we are looking to expose expanded features of this framework to the developer community in the coming months.

Finally, we want your feedback! Ask questions, suggest topics, and share your stories. You can do this at the Developer Lounge section of the Google PowerMeter forum.

We hope you join us for the ride ahead.

By Srikanth Rajagopalan, Product Manager

Tuesday, March 2, 2010

[Gd] Sharing the verification love

| More

Official Google Webmaster Central Blog: Sharing the verification love

Webmaster Level: All

Everything is more fun with a friend! We've just added a feature to Webmaster Tools Site Verification to make it easier to share verified ownership of your websites.

In the past, if more than one person needed to be a verified owner of a website, they each had to go through the meta tag or HTML file verification process. That works fine for some situations, but for others it can be challenging. For example, what if you have twenty people who need to be verified owners of your site? Adding twenty meta tags or HTML files could be pretty time-consuming. Our new verification delegation feature makes adding new verified owners a snap.

Once you're a verified owner of a website, you can view the Verification Details page (linked from Webmaster Tools or the Verification home page). That page will show you information about the site as well as a list of any other verified owners. At the bottom of the list of owners, you'll now see a button labeled "Add a user...". Click that, enter the user's email address, and that person will instantly become a verified owner for the site! You can remove that ownership at any time by clicking the "Unverify" link next to the person's email address on the Details page.

There are a few important things to keep in mind as you use this feature. First, each site must always have at least one owner who has verified directly (via meta tag or HTML file). If all of the directly verified owners become unverified, the delegated owners may also become unverified. Second, you can only delegate ownership to people with Google Accounts. Finally, remember that anyone you delegate ownership to will have exactly the same access you have. They can delegate to more people, submit URL Removal requests and manage Sitelinks in Webmaster Tools, etc. Only delegate ownership to people you trust!

We hope this makes things a little easier for those of you who need more than one person to be a verified owner of your site. As always, please visit the Webmaster Help Forum if you have any questions.

Sean Harding, Software Engineer