Saturday, October 1, 2011

[Gd] Building Integrated Apps for the Mobile Workforce


Google Apps Developer Blog: Building Integrated Apps for the Mobile Workforce

The Google Apps Marketplace is a storefront for Google Apps customers to discover, purchase, deploy and manage web applications which are integrated with Google Apps. These applications are typically used from desktops and laptops, but many vendors on the Apps Marketplace have also optimized the experience for their users who are on-the-go. There are several different strategies for enabling a mobile workforce, and each requires a different approach to authentication and authorization.

Lightweight: Synchronize Contacts, Calendars and Docs with Google Apps

Google has written applications and synchronization clients to help ensure that the core Google Apps data is available to users on their mobile devices, whether they’re on their mobile phones or tablets. By storing contacts, dates and documents from your application in Google Apps using the application APIs, you can leverage these features to provide a mobile view for your users.

Since you’re only accessing the application APIs on your web application’s server, and the user has already linked up their mobile device to their Google account, there are no special techniques for authentication and authorization when using this lightweight approach.

Standards-based: Build a mobile-optimized web application

With the latest advances in HTML5 web technologies such as offline and local storage, it’s possible to build mobile interfaces for business apps which are full-featured and accessible to users on many devices. The primary goal in building the mobile web application is to optimize the user experience for different input devices, form factors and limitations in network availability and bandwidth.

Because the application runs in a web browser, most of the changes to implement are in the frontend: HTML, JavaScript and CSS. User authentication and data authorization continue to use the same OpenID and OAuth technologies as the desktop/laptop version of the application.

Device-custom: Build native companion apps for mobile devices

Does your application need access to hardware-specific APIs which are not available in a web browser, or do you feel a great user experience can only be achieved using native code? Several Apps Marketplace vendors have built native applications for popular mobile platforms like Android and iOS. Although it takes considerably more effort to build multiple native applications to cover the major platforms, these vendors can also take advantage of the additional distribution channels offered by mobile stores.

Authentication and authorization are often challenging for developers building native mobile applications because they cannot simply ask users for a password if their app supports single-sign on to Google with OpenID. We recently published an article describing a technique using an embedded webview for accomplishing OpenID authentication in mobile apps. This article includes references to sample code for Android and iOS.

Many Project Management applications, like Manymoon, store important dates on Google Calendar. These dates are then available on mobile devices.

GQueues has an HTML5 mobile app. Their founder has written about why they used this technique.

Native applications, such as the OpenID Sample Store displayed, can use an embedded webview to authenticate users.

Ryan Boyd   profile | twitter | events

Ryan is a Developer Advocate on the Google Apps Marketplace team, helping businesses build applications integrated into Google Apps. Wearing both engineering and business development hats, you'll find Ryan writing code and helping businesses get to market with integrated features.


[Gd] Using Fusion Tables with Apps Script


Google Apps Developer Blog: Using Fusion Tables with Apps Script

Editor’s Note: This post written by Ferris Argyle. Ferris is a Sales Engineer with the Enterprise team at Google, and had written fewer than 200 lines of JavaScript before beginning this application. --Ryan Boyd

I started with Apps Script in the same way many of you probably did: writing extensions to spreadsheets. When it was made available in Sites, I wondered whether it could meet our needs for gathering roadmap input from our sales engineering and enterprise deployment teams.

Gathering Roadmap Data

At Google, teams like Enterprise Sales Engineering and Apps Deployment interact with customers and need to share product roadmap ideas with Product Managers, who use this input to iterate and make sound roadmap decisions. We needed to build a tool to support this requirement: an application to gather roadmap input from the enterprise sales engineering and deployment teams, provide a unified way of prioritizing customer requirements, and support product management roadmap decisions. We also needed a way to share the actual customer use cases from which these requirements originated.

The Solution

This required bringing together the capabilities of Google Forms, Spreadsheets and Moderator in a single application: form-based user input, dynamically generated structured lists, and ranking.

This sounds like a fairly typical online transaction processing (OLTP) application, and Apps Script provides rich and evolving UI services, including the ability to create grids, event handlers, and now a WYSIWYG GUI Builder; all we needed was a secure, scalable SQL database backend.

One of my geospatial colleagues had done some great work on a demo using a Fusion Tables backend, so I did a little digging and found this example of how to use the APIs in Apps Script (thank you, Fusion Tables Developer Relations).

Using the CRUD Wrappers

Full sample code for this app is available and includes a test harness, required global variables, additional CRUD wrappers, and authorization and Fusion REST calls. It has been published to the Script Gallery under the title "Using Fusion Tables with Apps Script."

The CRUD Wrappers:

/**
 * Read records
 * @param {string} tableId The id of the Fusion Table from which records will be read
 * @param {string} selectColumn The Fusion Table columns which will be returned by the read
 * @param {string} whereColumn The Fusion Table column which will be searched to determine whether the record already exists
 * @param {string} whereValue The value to search for in the Fusion Table whereColumn; can be '*'
 * @return {string} An array containing the read records if no error; the bubbled return code from the Fusion query API if error
 */
function readRecords_(tableId, selectColumn, whereColumn, whereValue) {

  var query = '';
  var foundRecords = [];
  var returnVal = false;
  var tableList = [];
  var row = [];
  var columns = [];
  var rowObj = new Object();

  if (whereValue == '*') {
    query = 'SELECT ' + selectColumn + ' FROM ' + tableId;
  } else {
    query = 'SELECT ' + selectColumn + ' FROM ' + tableId +
        ' WHERE ' + whereColumn + ' = \'' + whereValue + '\'';
  }

  foundRecords = fusion_('get', query);

  if (typeof foundRecords == 'string' && foundRecords.indexOf('>> Error') > -1) {
    // bubble up the error string returned by the Fusion query API
    returnVal = foundRecords;
  } else if (foundRecords.length > 1) {
    // first row is the header, so use it to define the columns array
    row = foundRecords[0];
    columns = [];
    for (var k = 0; k < row.length; k++) {
      columns[k] = row[k];
    }
    for (var i = 1; i < foundRecords.length; i++) {
      row = foundRecords[i];
      if (row.length > 0) {
        // construct an object with the row fields
        rowObj = {};
        for (var k = 0; k < row.length; k++) {
          rowObj[columns[k]] = row[k];
        }
        // start new array at zero to conform with JavaScript conventions
        tableList[i - 1] = rowObj;
      }
    }
    returnVal = tableList;
  }

  return returnVal;
}

Now all I needed were CRUD-type (Create, Read, Update, Delete) Apps Script wrappers for the Fusion Tables APIs, and I’d be in business. I started with wrappers which were specific to my application, and then generalized them to make them more re-usable. I’ve provided examples above so you can get a sense of how simple they are to implement.

The result is a dynamically scalable base layer for OLTP applications with the added benefit of powerful web-based visualization, particularly for geospatial data, and without the traditional overhead of managing tablespaces.

I’m a Fusion tables beginner, so I can’t wait to see what you can build with Apps Script and Fusion Tables. You can get started here: Importing data into Fusion Tables, and Writing a Fusion Tables API Application.


  • Fusion Tables is protected by OAuth. This means that you need to authorize your script to access your tables. The authorization code uses “anonymous” keys and secrets; this does NOT mean that your tables are available anonymously.
  • Some assumptions were made in the wrappers which you may wish to change to better match your use case:
    • key values are unique in a table
    • update automatically adds a record if it’s not already there, and automatically removes duplicates
  • Characters such as apostrophes in the data fields will be interpreted as quotation marks and cause SQL errors: you’ll need to escape these to avoid issues.
  • Fusion Tables have both an id and a name; use the table id (shown in the Fusion Tables UI under “File > About”) and column names to construct your queries
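One way to handle the apostrophe caveat above is to escape values before splicing them into the query string. A minimal sketch (the helper names here are my own, not part of the published wrappers):

```python
def escape_value(value):
    """Escape backslashes and single quotes so the value survives
    inside a single-quoted SQL literal."""
    return value.replace("\\", "\\\\").replace("'", "\\'")

def build_select(table_id, select_column, where_column, where_value):
    """Mirror the wrappers' query construction, with escaping applied."""
    return ("SELECT " + select_column + " FROM " + table_id +
            " WHERE " + where_column + " = '" +
            escape_value(where_value) + "'")

# A value containing an apostrophe no longer breaks the query.
query = build_select("1a2b3c", "Name", "Company", "O'Reilly")
```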

Ferris Argyle

Ferris is a Sales Engineer with the Enterprise team at Google.                                                       


[Gd] Documents List API Best Practices: Sharing Multiple Documents Using Collections


Google Apps Developer Blog: Documents List API Best Practices: Sharing Multiple Documents Using Collections

Google Docs supports sharing collections and their contents with others. This allows multiple Google Docs resources to be shared at once, and additional resources added to the collection later are automatically shared as well. One EDU application on the Google Apps Marketplace uses this technique: when a professor creates a new course, the application automatically creates a Google Docs collection for that course and shares it with all the students. This gives the students and professor a single place to go in Google Docs to access and manage all of their course files.

A collection is a Google Docs resource that contains other resources, typically behaving like a folder on a file system.

A collection resource is created by making an HTTP POST to the feed link with the category element’s term set to the folder kind, for example:

<?xml version='1.0' encoding='UTF-8'?>
<entry xmlns="http://www.w3.org/2005/Atom">
  <category scheme="http://schemas.google.com/g/2005#kind"
      term="http://schemas.google.com/docs/2007#folder"/>
  <title>Example Collection</title>
</entry>

To achieve the same thing using the Python client library, use the following code:

from gdata.docs.data import Resource

collection = Resource('folder')
collection.title.text = 'Example Collection'

# client is an authorized client
collection = client.create_resource(collection)

The new collection returned has a content element indicating the URL to use to add new resources to the collection. Resources are added by making HTTP POST requests to this URL.

<content src="…"
    type="application/atom+xml;type=feed" />

This process is simplified in the client libraries. For example, in the Python client library, resources can be added to the new collection by passing the collection into the create_resource method for creating resources, or the move_resource method for moving an existing resource into the collection, like so:

# Create a new resource of document type in the collection
new_resource = Resource(type='document', title='New Document')
client.create_resource(new_resource, collection=collection)

# Move an existing resource
client.move_resource(existing_resource, collection=collection)

Once resources have been added to the collection, the collection can be shared using ACL entries. For example, to add the user as a writer to the collection and every resource in the collection, the client creates and adds the ACL entry like so:

from gdata.acl.data import AclScope, AclRole
from gdata.docs.data import AclEntry

acl = AclEntry(
    scope=AclScope(value='user@example.com', type='user'),
    role=AclRole(value='writer')
)

client.add_acl_entry(collection, acl)

The collection and its contents are now shared, and this can be verified in the Google Docs user interface:

Note: if the application is adding more than one ACL entry, it is recommended to use batching to combine multiple ACL entries into a single request. For more information on this best practice, see the latest blog post on the topic.

The examples shown here are using the raw protocol or the Python client library. The Java client library also supports managing and sharing collections.

For more information on how to use collections, see the Google Documents List API documentation. You can also find assistance in the Google Documents List API forum.

Ali Afshar profile | twitter

Ali is a Developer Programs engineer at Google, working on Google Docs and the Shopping APIs which help shopping-based applications upload and search shopping content. As an eternal open source advocate, he contributes to a number of open source applications, and is the author of the PIDA Python IDE. Once an intensive care physician, he has a special interest in all aspects of technology for healthcare.


[Gd] Python, OAuth 2.0 & Google Data APIs


Google Apps Developer Blog: Python, OAuth 2.0 & Google Data APIs

Since March of this year, Google has supported OAuth 2.0 for many APIs, including Google Data APIs such as Google Calendar, Google Contacts and Google Documents List. Google's implementation of OAuth 2.0 introduces many advantages compared to OAuth 1.0 such as simplicity for developers and a more polished user experience.

We’ve just added support for this authorization mechanism to the gdata-python-client library. Let’s take a look at how it works by retrieving an access token for the Google Calendar and Google Documents List APIs and listing protected data.

Getting Started

First, you will need to retrieve or sync the project from the repository using Mercurial:

hg clone

For more information about installing this library, please refer to the Getting Started With the Google Data Python Library article.

Now that the client library is installed, you can go to your APIs Console to either create a new project, or use information about an existing one from the API Access pane:

Getting the Authorization URL

Your application will require the user to grant permission for it to access protected APIs on their behalf. It must redirect the user over to Google's authorization server and specify the scopes of the APIs it is requesting permission to access.

Available Google Data API’s scopes are listed in the Google Data FAQ.

Here's how your application can generate the appropriate URL and redirect the user:

import gdata.gauth

# The client id and secret can be found on your API Console.
CLIENT_ID = '...'
CLIENT_SECRET = '...'

# Authorization can be requested for multiple APIs at once by specifying
# multiple scopes separated by spaces.
SCOPES = ['https://www.google.com/calendar/feeds/',
          'https://docs.google.com/feeds/']

# Save the token for later use.
token = gdata.gauth.OAuth2Token(
    client_id=CLIENT_ID, client_secret=CLIENT_SECRET,
    scope=' '.join(SCOPES), user_agent='your-user-agent')

# The "redirect_uri" parameter needs to match the one you entered in the
# API Console and points to your callback handler.
self.redirect(token.generate_authorize_url(redirect_uri='...'))

If all the parameters match what has been provided by the API Console, the user will be shown this dialog:

When an action is taken (e.g. allowing or declining the access), Google’s authorization server will redirect the user to the specified redirect URL and include an authorization code as a query parameter. Your application then needs to make a call to Google’s token endpoint to exchange this authorization code for an access token.
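The callback handling described above amounts to inspecting the redirect’s query parameters. A minimal sketch using only the standard library (the URL and the handler shape are illustrative):

```python
from urllib.parse import urlparse, parse_qs

def parse_callback(uri):
    """Classify an OAuth 2.0 redirect: authorization code or error."""
    query = parse_qs(urlparse(uri).query)
    if 'error' in query:
        # The user declined; surface the error to the application.
        return ('error', query['error'][0])
    # Success: this code is then exchanged at the token endpoint.
    return ('code', query['code'][0])

kind, value = parse_callback('https://example.com/callback?code=4/abc123')
```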

Getting an Access Token

import atom.http_core

url = atom.http_core.Uri.parse_uri(self.request.uri)
if 'error' in url.query:
  # The user declined the authorization request.
  # Application should handle this error appropriately.
  pass
else:
  # This is the token instantiated in the first section.
  token.get_access_token(url.query)

The redirect handler retrieves the authorization code that has been returned by Google’s authorization server and exchanges it for a short-lived access token and a long-lived refresh token that can be used to retrieve a new access token. Both access and refresh tokens are to be kept private to the application server and should never be revealed to other client applications or stored as a cookie.

To store the token object in a secured datastore or keystore, the gdata.gauth.token_to_blob() function can be used to serialize the token into a string. The gdata.gauth.token_from_blob() function does the opposite operation and instantiates a new token object from a string.
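As a rough sketch of that round trip, here is a stdlib illustration of serializing a token-like object to a string and back. The Token class and its field set are hypothetical stand-ins; gdata’s token_to_blob()/token_from_blob() handle the real gdata token types:

```python
import json

class Token:
    """Illustrative token holder; the real OAuth2Token carries similar fields."""
    def __init__(self, access_token, refresh_token):
        self.access_token = access_token
        self.refresh_token = refresh_token

def token_to_blob(token):
    # Serialize to a string suitable for a datastore/keystore value.
    return json.dumps({'access_token': token.access_token,
                       'refresh_token': token.refresh_token})

def token_from_blob(blob):
    # Rebuild a token object from the stored string.
    data = json.loads(blob)
    return Token(data['access_token'], data['refresh_token'])

blob = token_to_blob(Token('ya29.ABC', '1/xyz'))
restored = token_from_blob(blob)
```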

Calling Protected APIs

Now that an access token has been retrieved, it can be used to authorize calls to the protected APIs specified in the scope parameter.

import gdata.calendar.client
import gdata.docs.client

# Access the Google Calendar API.
calendar_client = gdata.calendar.client.CalendarClient(source=USER_AGENT)
# This is the token instantiated in the first section.
calendar_client = token.authorize(calendar_client)
calendars_feed = calendar_client.GetCalendarsFeed()
for entry in calendars_feed.entry:
  print entry.title.text

# Access the Google Documents List API.
docs_client = gdata.docs.client.DocsClient(source=USER_AGENT)
# This is the token instantiated in the first section.
docs_client = token.authorize(docs_client)
docs_feed = docs_client.GetDocumentListFeed()
for entry in docs_feed.entry:
  print entry.title.text

For more information about OAuth 2.0, please have a look at the developer’s guide and let us know if you have any questions by posting them in the support forums for the APIs you’re accessing.

Alain Vongsouvanh profile | events

Alain is a Developer Programs Engineer for Google Apps with a focus on Google Calendar and Google Contacts. Before Google, he graduated with his Masters in Computer Science from EPITA, France.

Updated 9/30/2011 to fix a small typo in the code


[Gd] Documents List API Best Practices: Resumable Upload


Google Apps Developer Blog: Documents List API Best Practices: Resumable Upload

There are a number of ways to add resources to your Google Documents List using the API. Most commonly, clients need to upload an existing resource, rather than create a new, empty one. Legacy clients may be doing this in an inefficient way. In this post, we’ll walk through why using resumable uploads makes your client more efficient.

The resumable upload process allows your client to send small segments of an upload over time, and confirm that each segment arrived intact. This has a number of advantages.

Resumable uploads have a customizable memory footprint on client systems

Since only one small segment of data is sent to the API at a time, clients can store less data in memory as they send data to the API. For example, consider a client uploading a PDF via a regular, non-resumable upload in a single request. The client might follow these steps:

  1. Open file pointer to PDF
  2. Pass file pointer and PDF to client library
  3. Client library starts request
  4. Client library reads 100,000 bytes and immediately sends 100,000 bytes
  5. Client library repeats until all bytes sent
  6. Client library returns response

But that 100,000 bytes isn’t a customizable value in most client libraries. In some environments, with limited memory, applications need to choose a custom chunk size that is either smaller or larger.

The resumable upload mechanism allows for a custom chunk size. That means that if your application only has 500KB of memory available, you can safely choose a chunk size of 256KB.
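The chunking loop described above can be sketched as follows. Here send_chunk is a hypothetical stand-in for the per-chunk HTTP request to a resumable upload session; names and details are illustrative, not the actual client-library API:

```python
import io

CHUNK_SIZE = 256 * 1024  # chosen to fit the application's memory budget

def send_chunk(data, offset):
    """Stand-in for one HTTP PUT of a chunk with a Content-Range header.
    A real client would retry on failure and resume from the last
    confirmed offset instead of restarting the whole upload."""
    return offset + len(data)

def resumable_upload(stream, chunk_size=CHUNK_SIZE):
    offset = 0
    while True:
        data = stream.read(chunk_size)  # only chunk_size bytes in memory
        if not data:
            break
        offset = send_chunk(data, offset)
    return offset

total = resumable_upload(io.BytesIO(b'x' * 600000))
```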

Resumable uploads are reliable even though a connection may not be

In the previous example, if any of the bytes fail to transmit, this non-resumable upload fails entirely. This often happens in mobile environments with unreliable connections. Uploading 99% of a file, failing, and restarting the entire upload creates a bad user experience. A better user experience is to resume and upload only the remaining 1%.

Resumable uploads support larger files

Traditional non-resumable uploads via HTTP have size limits depending on both the client and server systems. These limits are not applicable to resumable uploads with reasonable chunk sizes, as individual HTTP requests are sent for each chunk of a file. Since the Documents List API now supports file sizes up to 10GB, this is very important.

Resumable upload support is already in the client libraries for Google Data APIs

The Java, Python, Objective-C, and .NET Google Data API client libraries all include a mechanism by which you can initiate a resumable upload session. Examples of uploading a document with resumable upload using the client libraries are detailed in the documentation. Additionally, the new Documents List API Python client library now uses only the resumable upload mechanism. To use that version, make sure to follow these directions.

Vic Fryzel   profile | twitter | blog

Vic is a Google engineer and open source software developer in the United States. His interests are generally in the artificial intelligence domain, specifically regarding neural networks and machine learning. He's an amateur robotics and embedded systems engineer, and also maintains a significant interest in the security of software systems and engineering processes behind large software projects.


Friday, September 30, 2011

[Gd] Announcing the next AdWords API Workshops in October / November

| More

AdWords API Blog: Announcing the next AdWords API Workshops in October / November

The Google Developer Relations team will be holding their next series of semi-annual AdWords API Workshops in October and November. These workshops will focus on new and upcoming features in the AdWords API, as well as best practices for special topics.

We invite you to join us for our next AdWords API Workshops which will be hosted in the following cities:

  • London, October 17th
  • San Francisco, October 18th
  • Hamburg, October 19th
  • Amsterdam, October 21st
  • New York City, October 25th
  • Tokyo, November 8th - New Location added this year
  • Singapore, November 11th - New Location added this year

In addition to presenting technical deep-dives on the topics listed below, we will have the team on hand to answer all your API-related questions, and we plan to give attendees a sneak peek and an opportunity to beta test a new programming feature being rolled out in the AdWords Frontend.

Workshop topics include:

  • Report Service Updates
  • AdWords API Authentication & Authorization Updates
  • Efficient API Usage with the Simplified Mutate Job Service
  • Campaign Targeting Changes
  • Mobile Best Practices

All events will have the same agenda, and will run from approximately 10:00AM - 3:00PM (local time). These workshops are geared towards software engineers who are already familiar with the AdWords API. Each session will focus on writing code and there will be no non-technical track.

For more information and to register, visit:

-- Sumit Chandel, AdWords API Team


[Gd] Fridaygram: Dead Sea Scrolls online, monument climbing, dinosaur feathers


The official Google Code blog: Fridaygram: Dead Sea Scrolls online, monument climbing, dinosaur feathers

By Scott Knaster, Google Code Blog Editor

The Dead Sea Scrolls were lost in the Judean desert for more than 2000 years before being rediscovered in 1947. Now The Digital Dead Sea Scrolls project makes five of the ancient documents available online to everyone.

The online scrolls contain incredibly high-resolution photography (up to 1200 megapixels) and an English translation along with the original Hebrew text. Looking through the scrolls online is a remarkable mashup of ancient artifacts and modern technology.

Not everything can be done online: sometimes you need to be there. When a magnitude 5.8 earthquake struck near Washington, D.C. last August, the Washington Monument suffered visible damage. This week the U.S. National Park Service sent its "difficult access team" to rappel up and down the monument to check for damage. Civil engineer Emma Cardini seemed to enjoy the task and was quoted as saying "It’s really cool to see the planes flying under you." See, that’s why it’s great to be an engineer.

Birds fly, too – but dinosaurs with feathers? Check out this news from Canada about the discovery of amber-bound feathers that belonged to dinosaurs and birds from the late Cretaceous period.

Fridaygram is a weekly post containing a cool Google-related announcement and a couple of fun science-based tidbits. But no cake.

Thursday, September 29, 2011

[Gd] YouTube API and the News


YouTube API Blog: YouTube API and the News

If you love to follow the news as much as you love to code, then Hacks/Hackers, an international organization that sits at the nexus of journalism and technology, is for you. Its mission is to create a network of journalists (“hacks”) and technologists (“hackers”) to rethink the future of news and information. Recently, YouTube and LinkTV hosted a Hacks/Hackers meetup at Google’s San Francisco office. Together with four developer partners, we demoed web applications used by reporters and built using the YouTube API.  The presentation started with a YouTube API overview, followed by demos of the following:
  • YouTube Direct is an open source, user-generated content video submission and moderation platform.
  • Storyful was founded by journalists to discover the smartest conversations about world events and raise up the authentic voices on the big stories.
  • Storify lets users make stories using social media. With Storify you can drag-and-drop tweets, YouTube videos, Flickr images, Facebook updates, etc. and add your own narrative to tell a story.
  • Shortform is a new social entertainment medium, delivering continuous channels containing the best videos from anywhere on the web, curated by our community of video DJs (VJs).
  • GoAnimate was founded to provide an outlet for everyone's creative ideas. In just 10 minutes, you can make fun animated videos without having to draw.
  • Link TV recently launched Link News, an international news website that sifts through YouTube's library of news content to deliver breaking news and hidden stories to a wider audience. 
We would like to share the video recording of the event with you so that you can learn more. If you want to discover more about Hacks/Hackers, you can find the list of local chapters here.  

—Jarek Wilkiewicz, YouTube API Team

[Gd] Android’s HTTP Clients


Android Developers Blog: Android’s HTTP Clients


[This post is by Jesse Wilson from the Dalvik team. —Tim Bray]

Most network-connected Android apps will use HTTP to send and receive data. Android includes two HTTP clients: HttpURLConnection and Apache HTTP Client. Both support HTTPS, streaming uploads and downloads, configurable timeouts, IPv6 and connection pooling.

Apache HTTP Client

DefaultHttpClient and its sibling AndroidHttpClient are extensible HTTP clients suitable for web browsers. They have large and flexible APIs. Their implementation is stable and they have few bugs.

But the large size of this API makes it difficult for us to improve it without breaking compatibility. The Android team is not actively working on Apache HTTP Client.


HttpURLConnection

HttpURLConnection is a general-purpose, lightweight HTTP client suitable for most applications. This class has humble beginnings, but its focused API has made it easy for us to improve steadily.

Prior to Froyo, HttpURLConnection had some frustrating bugs. In particular, calling close() on a readable InputStream could poison the connection pool. Work around this by disabling connection pooling:

private void disableConnectionReuseIfNecessary() {
    // HTTP connection reuse which was buggy pre-froyo
    if (Integer.parseInt(Build.VERSION.SDK) < Build.VERSION_CODES.FROYO) {
        System.setProperty("http.keepAlive", "false");
    }
}

In Gingerbread, we added transparent response compression. HttpURLConnection will automatically add this header to outgoing requests, and handle the corresponding response:

Accept-Encoding: gzip

Take advantage of this by configuring your Web server to compress responses for clients that can support it. If response compression is problematic, the class documentation shows how to disable it.

Since HTTP’s Content-Length header returns the compressed size, it is an error to use getContentLength() to size buffers for the uncompressed data. Instead, read bytes from the response until InputStream.read() returns -1.
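The mismatch is easy to see with a quick gzip round trip (Python here purely for illustration of the size difference, not the Android API):

```python
import gzip

body = b'<html>' + b'hello world ' * 1000 + b'</html>'
compressed = gzip.compress(body)

# A buffer sized from the compressed length (what Content-Length reports
# for a gzipped response) would be far too small for the real payload.
assert len(compressed) < len(body)
assert gzip.decompress(compressed) == body
```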

We also made several improvements to HTTPS in Gingerbread. HttpsURLConnection attempts to connect with Server Name Indication (SNI) which allows multiple HTTPS hosts to share an IP address. It also enables compression and session tickets. Should the connection fail, it is automatically retried without these features. This makes HttpsURLConnection efficient when connecting to up-to-date servers, without breaking compatibility with older ones.

In Ice Cream Sandwich, we are adding a response cache. With the cache installed, HTTP requests will be satisfied in one of three ways:

  • Fully cached responses are served directly from local storage. Because no network connection needs to be made such responses are available immediately.

  • Conditionally cached responses must have their freshness validated by the webserver. The client sends a request like “Give me /foo.png if it changed since yesterday” and the server replies with either the updated content or a 304 Not Modified status. If the content is unchanged it will not be downloaded!

  • Uncached responses are served from the web. These responses will get stored in the response cache for later.
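As a rough illustration of these three outcomes, here is a sketch of the lookup logic; the cache structure and the fetch_from_network/validate callbacks are hypothetical stand-ins, not the Android API:

```python
def serve(url, cache, fetch_from_network, validate):
    """Resolve a request via the three response-cache outcomes."""
    entry = cache.get(url)
    if entry and entry['fresh']:
        # 1. Fully cached: served directly from local storage.
        return entry['body']
    if entry:
        # 2. Conditionally cached: ask the server whether it changed.
        changed, body = validate(url, entry)
        if not changed:
            return entry['body']  # 304 Not Modified: nothing re-downloaded
        cache[url] = {'body': body, 'fresh': True}
        return body
    # 3. Uncached: fetch from the web and store for later requests.
    body = fetch_from_network(url)
    cache[url] = {'body': body, 'fresh': True}
    return body

cache = {'logo.png': {'body': b'cached bytes', 'fresh': True}}
body = serve('logo.png', cache, None, None)  # served without any network call
```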

Use reflection to enable HTTPS response caching on devices that support it. This sample code will turn on the response cache on Ice Cream Sandwich without affecting earlier releases:

private void enableHttpResponseCache() {
    try {
        long httpCacheSize = 10 * 1024 * 1024; // 10 MiB
        File httpCacheDir = new File(getCacheDir(), "http");
        Class.forName("android.net.http.HttpResponseCache")
            .getMethod("install", File.class, long.class)
            .invoke(null, httpCacheDir, httpCacheSize);
    } catch (Exception httpResponseCacheNotAvailable) {
        // Running on a release without HttpResponseCache; continue without it.
    }
}

You should also configure your Web server to set cache headers on its HTTP responses.

Which client is best?

Apache HTTP client has fewer bugs on Eclair and Froyo. It is the best choice for these releases.

For Gingerbread and better, HttpURLConnection is the best choice. Its simple API and small size make it a great fit for Android. Transparent compression and response caching reduce network use, improve speed and save battery. New applications should use HttpURLConnection; it is where we will be spending our energy going forward.


[Gd] Work smarter, not harder, with site health


Official Google Webmaster Central Blog: Work smarter, not harder, with site health

Webmaster level: All

We consistently hear from webmasters that they have to prioritize their time. Some manage dozens or hundreds of clients’ sites; others run their own business and may only have an hour to spend on website maintenance in between managing finances and inventory. To help you prioritize your efforts, Webmaster Tools is introducing the idea of “site health,” and we’ve redesigned the Webmaster Tools home page to highlight your sites with health problems. This should allow you to easily see what needs your attention the most, without having to click through all of the reports in Webmaster Tools for every site you manage.

Here’s what the new home page looks like:

You can see that sites with health problems are shown at the top of the list. (If you prefer, you can always switch back to listing your sites alphabetically.) To see the specific issues we detected on a site, click the site health icon or the “Check site health” link next to that site:

This new home page is currently only available if you have 100 or fewer sites in your Webmaster Tools account (either verified or unverified). We’re working on making it available to all accounts in the future. If you have more than 100 sites, you can see site health information at the top of the Dashboard for each of your sites.

Right now we include three issues in your site’s health check:

  1. Have we detected malware on the site?
  2. Have any important pages been removed via our URL removal tool?
  3. Are any of your important pages blocked from crawling in robots.txt?
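As an illustration of the third check, a robots.txt like the following (paths hypothetical) blocks crawlers from whole sections of a site; if pages that earn clicks live under those paths, the health check would flag them:

```
User-agent: *
Disallow: /private/
Disallow: /checkout/
```

Blocking checkout or account pages is usually intentional; the check exists for the cases where a rule like this accidentally covers pages you want indexed.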

You can click on any of these items to get more details about what we detected on your site. If the site health icon and the “Check site health” link don’t appear next to a site, it means that we didn’t detect any of these issues on that site (congratulations!).

A word about “important pages:” as you know, you can get a comprehensive list of all URLs that have been removed by going to Site configuration > Crawler access > Remove URL; and you can see all the URLs that we couldn’t crawl because of robots.txt by going to Diagnostics > Crawl errors > Restricted by robots.txt. But since webmasters often block or remove content on purpose, we only wanted to indicate a potential site health issue if we think you may have blocked or removed a page you didn’t mean to, which is why we’re focusing on “important pages.” Right now we’re looking at the number of clicks pages get (which you can see in Your site on the web > Search queries) to determine importance, and we may incorporate other factors in the future as our site health checks evolve.

Obviously these three issues—malware, removed URLs, and blocked URLs—aren’t the only things that can make a website “unhealthy;” in the future we’re hoping to expand the checks we use to determine a site’s health, and of course there’s no substitute for your own good judgment and knowledge of what’s going on with your site. But we hope that these changes make it easier for you to quickly spot major problems with your sites without having to dig down into all the data and reports.

After you’ve resolved any site health issues we’ve flagged, it will usually take several days for the warning to disappear from your Webmaster Tools account, since we have to recrawl the site, see the changes you’ve made, and then process that information through our Web Search and Webmaster Tools pipelines. If you continue to see a site health warning for that site after a week or so, the issue may not have been resolved. Feel free to ask for help tracking it down in our Webmaster Help Forum... and let us know what you think!

Posted by a Webmaster Trends Analyst


[Gd] Coding with data from our Transparency Report


The official Google Code blog: Coding with data from our Transparency Report

By Matt Braithwaite, Transparency Engineering Tech Lead

More than a year ago, we launched our Transparency Report, which is a site that shows the availability of Google services around the world and lists the number of requests we’ve received from governments to either hand over data or to remove content. We wanted to provide a snapshot of government actions on the Web — and in recent cases like Libya and Myanmar, we were glad to see users start to get back on our services.

Today, we’re releasing the raw data behind our Government Requests tool in CSV format. Interested developers and researchers can take this data and revisualize it in different ways, or mash it up with information from other organizations to test and draw up new hypotheses about government behaviors online. We’ll keep these files up-to-date with each biannual data release. We’ve already seen some pretty cool visualizations of this data, despite the lack of a machine-readable version, but we figure that easier access can only help others to find new trends and make new inferences.

The data has grown complex enough that we can no longer build a UI that anticipates every question you might want to ask. For example, the Transparency Report doesn’t allow you to ask the question, "Which Google products receive the greatest number of removal requests across all countries?" Using Google Fusion Tables you can answer that question easily. (The top four are Google Web Search, YouTube, orkut, and Blogger.)
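Once the CSV files are downloaded, that kind of aggregate question is a few lines of code. A sketch, assuming a simplified (country, product, requests) column layout rather than the report's actual schema:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RemovalRequestTotals {
    /**
     * Sums removal requests per product from CSV rows shaped as
     * "country,product,requests". The column layout here is an
     * assumption for illustration, not the real report schema.
     */
    static Map<String, Integer> totalsByProduct(List<String> csvLines) {
        Map<String, Integer> totals = new HashMap<>();
        for (String line : csvLines) {
            String[] cols = line.split(",");
            String product = cols[1].trim();
            int requests = Integer.parseInt(cols[2].trim());
            totals.merge(product, requests, Integer::sum); // accumulate per product
        }
        return totals;
    }
}
```

Sorting the resulting map by value gives the cross-country ranking the Fusion Tables example produces.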

We believe it’s important to keep providing data to anchor policy conversations about Internet access and censorship with real facts — and we’ll continue to add more raw data and APIs to the Transparency Report in the future. So much can be done when engineers and policy wonks come together to talk about the future of the Internet, and we’re psyched to see the graphs, mashups, apps, and other great designs people come up with.

To kick things off, we’re sponsoring a forum to demonstrate the power of what can happen when engineering and policy work together. If you're an EU-based hacker, we invite you to apply to join us for an all-expenses-paid hackathon using this data at the EU Parliament in Brussels on November 8-9, 2011.

Matt Braithwaite is the Tech Lead for Google's Chicago-based Transparency Engineering team. He has a beard (not shown).

Posted by Scott Knaster, Editor


[Gd] What Does It Mean To Be A Google Developer? Share Your Story


The official Google Code blog: What Does It Mean To Be A Google Developer? Share Your Story

By Amy Walgenbach, Google Developer Marketing

Our developer program started in 2005 with a handful of APIs and developer advocates. Fast forward to today: Google offers over 100 APIs, dozens of developer tools, and a raft of developer advocates around the world. Obviously, a lot has changed and the Web has matured significantly. Google has also evolved and matured, and we felt that it was time to step back and rethink how we interact with and support our developer community. We believe we can make it easier to find what you’re looking for, and facilitate connections with others in the Google Developer community. We know we can do better and we want your input so that we can understand your needs — and what drives you — better.

Now we want to hear from you.

We want to know what inspires you as a developer and how Google can support you. What does being a Google developer mean to you? Tell us what’s important to you and how we can make your experience as a Google developer better. Like any good open source project, the Google developers project needs your contributions. Share your story so we can better support your success, and we may just pick you to be featured.

You can add a video (it's easy, really!) directly from the page, on your mobile phone, or write to us here. However you share with us, we’re looking forward to hearing what you have to say.

Amy Walgenbach is the Product Marketing lead for the Google+ platform and leads developer marketing for games at Google.

Posted by Ashleigh Rentz, Editor Emerita

[Gd] Project WOW


Google App Engine Blog: Project WOW

Today's post is contributed by Edward Hartwell Goose of PA Consulting, who is working on an App for the UK’s Met Office to report everyone's favorite bit of small talk, the weather. We hope you find the discussion of his team's experience using App Engine illuminating.

The UK’s Met Office is one of the world’s leading organisations in weather forecasting, providing reports throughout the day for the UK and the rest of the world. The weather information they provide is consumed by a variety of industries from shipping to aircraft, and powers some of the UK’s leading media organisations, such as the BBC.

Although the Met Office is the biggest provider of weather data, they aren’t the only ones collecting information. Thousands of enthusiasts worldwide collect their own weather data from a wide variety of weather stations - either simple temperature sensors or highly sophisticated stations that rival the Met Office’s own equipment. The question, of course, is: how do you harness the power of this crowd?

Enter the Weather Observations Website

The Met Office and our team from PA Consulting worked together to answer this question late last year. The end result was The Weather Observations Website, or “WOW” for short. In the three months since launch on 1st June 2011, WOW has recorded 5.5 million weather reports from countries throughout the world. Furthermore, we can retrieve current reports in sub-second times, providing a real-time map of worldwide weather. We haven’t got the whole globe covered just yet and the UK has the most sites, but most countries in Western Europe are reporting. We also have reports from a medley of countries throughout the world, from Mauritius, Brazil and as far away as New Zealand. We even have one site reporting at regular intervals in Oman (it’s hot!).

Better yet, as a development team of two, we’ve spent almost no time since launch doing anything but casually monitoring WOW. No one carries a pager, and the one time we did have problems (we underestimated demand, and our quota ran out), I was able to upgrade the quota in a minute or so. And I did it from my sofa. On my phone.

How good is that?

WOW - Showing Live Temperature Data Across Europe

Lessons Learnt Building for App Engine

We learnt a lot building WOW. A huge amount in fact. And we’d love to share our insights with you. We’d also love to tell you what we love about App Engine - and why you should use it too. And so you know we’re honest - we’ll tell you what we don’t like too!

Firstly - the good stuff. We think App Engine is a fantastic tool for prototyping and for any team working in an agile environment. We’re big fans of Scrum at PA Consulting, and App Engine was a dream to work with. While some of our colleagues wrestle with difficult build procedures and environments, our full release procedure never took more than five minutes. Better yet, App Engine’s deployment tools all hook up with Ant and CruiseControl (our continuous build system), allowing us to run unit tests and deploy new code on every check-in to our code repository. That freed us as developers to get on with what we do best: develop!

The APIs are great too. The documentation is fantastic and is regularly updated. We use all but the Channel API at the moment (sadly, it uses Google Talk Infrastructure behind the scenes and this gets blocked by some corporate environments). If I could offer any advice to a budding App Engine developer it would be to thoroughly read the documentation, and place the following three questions and their solutions at the forefront of everything you do:

  1. Can I do it in the background? - Task Queue API
  2. Can I use the low-level API? - Datastore API
  3. Can I cache the result? - Memcache API
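The third question, caching, boils down to a read-through pattern: check the cache, and only do the expensive work on a miss. A plain-Java sketch of that pattern (on App Engine the map below would be replaced by Memcache API calls, so treat this as an illustration of the principle, not App Engine code):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

/** Read-through cache: expensive loads happen once per key. */
public class ReadThroughCache<K, V> {
    private final Map<K, V> cache = new HashMap<>();
    private final Function<K, V> loader; // e.g. a datastore query
    int loads = 0; // counts expensive fetches, for illustration

    public ReadThroughCache(Function<K, V> loader) {
        this.loader = loader;
    }

    public V get(K key) {
        V value = cache.get(key);
        if (value == null) {            // miss: do the expensive work once
            value = loader.apply(key);
            cache.put(key, value);
            loads++;
        }
        return value;
    }
}
```

Real caches also need expiry and size bounds (Memcache handles both), but the miss-then-populate flow is the heart of it.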

These three principles have given WOW the performance it has, and will allow it to scale effectively over the coming years. We’ve got some big changes coming over the next couple of months too that should provide even higher performance as well as exciting new features.

Future Developments

So, how about improvements App Engine could make? There are definitely a few, although how important they are will depend on the problems you’re trying to solve.

To begin with, improvements to some of the non-technical elements are needed before App Engine becomes truly mainstream. Needing a credit card to pay is a showstopper for some organisations. Backup and restore is also missing, unless you implement it yourself. This is perfectly possible, but can add significant development effort if your data structure is complex and fast-changing.

We also struggled to estimate quota at first. One of the brilliant features of App Engine is how easy it is to spool up (and down) new instances to deal with demand. Unfortunately, this also means it can be quite easy to accidentally spool up too many instances and burn through quota quickly. Although this has never affected the actual data, it can cause an unpleasant spike in the amount of dollars spent. We also had a similar problem with a MapReduce job getting stuck overnight, which caused a scare the next morning. Hopefully the new monitoring API will provide more visibility into these problems, as well as automatic email notifications to help catch them.

Aside from that, other features will probably depend on the application you’re trying to build. Built-in support for geo-queries would be invaluable for WOW. Currently we use an external library, but this adds some extra overhead to our development. Another common feature request is full-text search, which is essential for projects dealing with large text corpora. Both of these features would allow us to provide better search facilities for our users - for example, search by site name or geographic location. These queries can be implemented in App Engine as it is now, but achieving optimal performance and optimal cost are difficult problems that we struggle to solve ourselves.

Final Thoughts

Overall, we’re really impressed by App Engine. The App Engine team regularly releases new versions, and although it does have limitations it has allowed us to concentrate on what really matters to us - the weather. We know WOW will scale without any problems, and we don’t have to worry about any of the hardware configuration or system administration that can easily consume time. Our small team of developers spends all of their time understanding the business problems and improving WOW.

We’re really looking forward to taking WOW forward, and we hope you can join us:

Edward Hartwell Goose (@edhgoose)



[Gd] Monetizing games with In-App Payments


The official Google Code blog: Monetizing games with In-App Payments

This guest post was written by Beau Harrington, Senior Development Director, Kabam

Cross-posted with the Google Commerce Blog

Kabam was part of the initial launch of Google+ Games with two game titles, Dragons of Atlantis and Edgeworld, and we recently added Global Warfare. For these games, we integrated Google In-App Payments, and we’re pleased with our games’ monetization to date. There are a couple of things we learned along the way that we’re happy to share with the community.

Integrating In-App Payments

Integrating In-App Payments in our games was very simple, especially when compared to other payment platforms. There is excellent documentation available, complete with examples for each step of the purchase flow. We also used open-source libraries such as ruby-jwt to generate the tokens required for each purchase option.
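The tokens ruby-jwt generates are HMAC-signed JWTs: a base64url-encoded header and payload, plus an HS256 signature over both. A stdlib-Java sketch of the same idea (the payload fields you would sign are whatever the purchase option requires; nothing here is an exact In-App Payments schema):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

/** Minimal HS256 JWT signer of the kind ruby-jwt provides. */
public class JwtSigner {
    static String base64Url(byte[] bytes) {
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    /** Returns header.payload.signature for the given JSON payload. */
    static String sign(String payloadJson, String secret) {
        try {
            String header = "{\"alg\":\"HS256\",\"typ\":\"JWT\"}";
            String signingInput =
                base64Url(header.getBytes(StandardCharsets.UTF_8)) + "." +
                base64Url(payloadJson.getBytes(StandardCharsets.UTF_8));
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(
                secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            byte[] sig = mac.doFinal(signingInput.getBytes(StandardCharsets.UTF_8));
            return signingInput + "." + base64Url(sig);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }
}
```

The important property is that the secret never leaves the server; the client only ever sees the signed token.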

We designed our games and purchase pages around the expectation of instant feedback, making sure to avoid page loads or refreshes wherever possible. For example, in Edgeworld, a player attacking an enemy base can load the list of Platinum options instantly, without waiting for the list of payment options to load. After their Platinum purchase, the player is immediately brought back to the game, with their new currency and items waiting for them.

Pro tip: strive to reduce purchaser friction

One of the keys to maximizing revenue is to remove as much friction as possible from the purchase flow, making sure as many people as possible get from one step of the flow to the next. Many payment platforms send players to their own website and multi-page checkout flow. The Google In-App Payments approach allows us to keep players on our game page for the entire flow, making sure we can manage more of the process and reduce abandonment.

Additionally, the player's credit card information is stored securely, so once a player has made a purchase anywhere using In-App Payments, their information is available for future purchases without additional data entry. Finally, JavaScript callbacks provided by In-App Payments allow us to show the effects of the purchase immediately, improving customer satisfaction.

General recommendations

For those experienced in this space, the following may seem rudimentary. At the same time, I’d be remiss not to include these recommendations as they are important to developing a successful game payments system:
  • Make sure your payment flow is as seamless as possible, never giving the player the opportunity to get bored waiting for something to load. 
  • Record and monitor each step of the payment flow in order to identify potential problems. 
  • Run A/B tests on your purchase option page to optimize the number of players who make a purchase, as well as the amount of the average purchase. 
We are proud to be among the first companies on Google’s exciting new monetization platform, and we look forward to the continuing growth in features, functionality and developer tools.

Beau Harrington is Senior Development Director of Kabam

Posted by Scott Knaster, Editor

Wednesday, September 28, 2011

[Gd] Dev Channel Updates for Chromebooks

| More

Google Chrome Releases: Dev Channel Updates for Chromebooks

The Dev channel has been updated to 15.0.874.51 (Platform version: 1011.43) for Chromebooks (Acer AC700, Samsung Series 5, and Cr-48).

  • Web UI login network fixes
  • Web UI login accessibility fixes
  • Fix several functionality and stability issues
Known issues:
  • 19931: Gmail rendering issue seen when scrolling down a long email thread
  • 20204: Gobi 3K activation fails and displays error page on Chrome OS
  • 20525: Gobi 2K shows error for first-time activation before zip code page
  • 20264: Gobi 2K activation is successful, but throws an error
  • 19421: 3G activation takes longer with Gobi 3K and 2K modems
If you find new issues, please let us know by visiting our help site or filing a bug. You can also submit feedback using "Report an issue" under the wrench icon. Interested in switching to the Beta channel? Find out how.

Josafat Garcia
Google Chrome

[Gd] The Beta channel has been updated to 15.0.874.51 for Windows,


Google Chrome Releases: The Beta channel has been updated to 15.0.874.51

The Beta channel has been updated to 15.0.874.51 for Windows, Mac, Linux, and Chrome Frame platforms.

  • Updated V8
  • Several crash fixes (including 96727, 93314, 97165, 96282)
  • Intranet URLs don't inline autocomplete (Issue 94805)
  • The New Tab Page bookmark pane has been reverted to the detached bar pending future improvements to the pane version. Thanks for all the feedback! (Issue: 92609)
  • Only show NTP4 info bubble for upgrading users (Issue 97103)
  • Sync not enforcing server legal bookmark names when migrating to new specifics (Issue 96623)
  • Fixed wrench menu bottom border truncated in Win 7 32-bit (Issue: 96505)
  • Native Client startup fixed for 32-bit Linux (Issue 92964)
  • Fixed fetching proxy settings on Gnome 3 systems when glib2-dev package is not installed (Issue 91744)

More details about additional changes are available in the svn: log of all revisions.

You can find out about getting on the Beta channel here.

If you find new issues, please let us know by filing a bug.

Karen Grunberg
Google Chrome

[Gd] It’s now easier to set up Google Analytics Site Search tracking for your Custom Search Engine


Custom Search Engine: It’s now easier to set up Google Analytics Site Search tracking for your Custom Search Engine

Google Analytics Site Search reports provide extensive data on how people search your site once they are already on it. You can see initial searches, refinements, search trends, which pages they searched from, where they ended up, and conversion correlation. In the past, we admit, setup was a little challenging, but we’re happy to announce that we’ve now made it easy to set up Site Search tracking directly from your Custom Search Engine.

If you are already a Google Analytics user (and your site has the Google Analytics tracking code on its pages), go to the Custom Search Engine management page, select your CSE’s control panel and click on Google Analytics from the left-hand menu.  We’ll display a list of your Google Analytics web properties so you can select one and tell us the query and category parameters that you want to track.

Once you save your changes, we’ll generate a new code snippet.  Copy it from the Get Code page, paste it into your site and setup is complete!



You can then access Site Search reports from the Content section of Google Analytics.



Happy analyzing!  If needed, you can find help with setup here and an explanation of the differences between Google Analytics and Custom Search statistics here. Let us know what you think in our discussion forum.

Posted by: Zhong Wang, Software Engineer


Tuesday, September 27, 2011

[Gd] Integrate Google Web Font selection into your apps


The official Google Code blog: Integrate Google Web Font selection into your apps

By Jeremie Lenfant-Engelmann, Google Web Fonts Engineer

We’ve received lots of requests from developers for a dynamic feed of the most recent web fonts offered via Google Web Fonts. Such a feed would ensure that you can incorporate Google Web Fonts into applications and menus dynamically, without the need to hardcode any URLs. The benefits of this approach are clear. As Google Web Fonts continues to add fonts, these fonts can become immediately available within your applications and sites.

To address this need, we’ve built the Google Web Fonts Developer API, which provides a list of fonts offered via Google Web Fonts. Results can be sorted by alpha, date added, popularity, number of styles available, and trending (which is a measure of fonts growing rapidly in usage). Check out the documentation to get started.
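A request to the API is a single GET with an API key and an optional sort parameter. A sketch of building that URL in Java (the endpoint and the sort values mirror the options described above; check the documentation for the current form):

```java
import java.util.Arrays;
import java.util.List;

/** Builds a Google Web Fonts Developer API request URL. */
public class WebFontsRequest {
    // Sort options corresponding to alpha, date added, popularity,
    // number of styles, and trending
    static final List<String> SORTS =
        Arrays.asList("alpha", "date", "popularity", "style", "trending");

    static String listUrl(String apiKey, String sort) {
        if (!SORTS.contains(sort)) {
            throw new IllegalArgumentException("unknown sort: " + sort);
        }
        return "https://www.googleapis.com/webfonts/v1/webfonts?key="
            + apiKey + "&sort=" + sort;
    }
}
```

The response is a JSON list of font families, which an app can feed straight into a font-picker menu without hardcoding any names.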

Some developers have helped us test this new API over the last few months, and the results are already public. Take a look at TypeDNA’s Photoshop plugin as well as Faviconist, an app that makes generating favicons as simple as can be, and Google Web Fonts Families, a list of Google Web Fonts that have more than one style.

We look forward to seeing what you come up with!

Jeremie Lenfant-Engelmann is a Software Engineer on the Google Web Fonts team.

Posted by Scott Knaster, Editor