Friday, November 13, 2009

[Gd] Help Google index your mobile site


Official Google Webmaster Central Blog: Help Google index your mobile site

(This post was largely translated from our Japanese Webmaster Central Blog.)

It seems the world is going mobile, with many people using mobile phones on a daily basis and a large user base searching on Google's mobile search page. However, as a webmaster, running a mobile site and tapping into the mobile search audience isn't easy. Mobile sites not only use a different format from normal desktop sites, but the management methods and expertise required are also quite different. This results in a variety of new challenges. As a mobile search engineer, it's clear to me that while many mobile sites were designed with mobile viewing in mind, they weren't designed to be search friendly. I'd like to help ensure that your mobile site is also available for users of mobile search.

Here are troubleshooting tips to help ensure that your site is properly crawled and indexed:

Verify that your mobile site is indexed by Google

If your web site doesn't show up in the results of a Google mobile search even when you use the 'site:' operator, it may be that your site has one or both of the following issues:
Googlebot may not be able to find your site
Googlebot, our crawler, must crawl your site before it can be included in our search index. If you just created the site, we may not yet be aware of it. If that's the case, create a Mobile Sitemap and submit it to Google to inform us of the site's existence. A Mobile Sitemap can be submitted using Google Webmaster Tools, in the same way as a standard Sitemap.
Googlebot may not be able to access your site
Some mobile sites refuse access to anything but mobile phones, making it impossible for Googlebot to access the site, and therefore making the site unsearchable. Our crawler for mobile sites is "Googlebot-Mobile". If you'd like your site crawled, please allow access to any User-agent that includes "Googlebot-Mobile". Also be aware that Google may change its User-agent information at any time without notice, so we don't recommend checking whether the User-agent exactly matches "Googlebot-Mobile" (which is the string used at present). Instead, check whether the User-agent header contains the string "Googlebot-Mobile". You can also use DNS lookups to verify that a visitor really is Googlebot.
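The recommended substring check can be sketched in a few lines of Python (the function name is ours, not part of any Google API):

```python
# Accept any User-agent header that *contains* "Googlebot-Mobile",
# rather than testing for an exact match, since the full string
# (version numbers etc.) may change over time.
def is_googlebot_mobile(user_agent):
    return "Googlebot-Mobile" in user_agent

# Full verification would additionally do a reverse-then-forward DNS
# lookup on the requesting IP to confirm the host really belongs to
# Google; that network step is omitted from this sketch.
```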

Verify that Google can recognize your mobile URLs

Once Googlebot-Mobile crawls your URLs, we then check whether each URL is viewable on a mobile device. Pages we determine aren't viewable on a mobile phone won't be included in our mobile site index (although they may be included in the regular web index). This determination is based on a variety of factors, one of which is the DTD (Document Type Definition) declaration. Check that your mobile-friendly URLs' DTD declaration is in an appropriate mobile format such as XHTML Mobile or Compact HTML. If it's in a compatible format, the page is eligible for the mobile search index. For more information, see the Mobile Webmaster Guidelines.
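As a rough illustration (this is not Google's actual detection logic), a DTD-based check like the one described could look for a mobile doctype near the top of the page:

```python
import re

# Match a DOCTYPE declaration that names a mobile profile such as
# XHTML Mobile Profile or Compact HTML (cHTML).
MOBILE_DTD = re.compile(
    r'<!DOCTYPE[^>]*(XHTML\s+Mobile|Compact\s*HTML)', re.IGNORECASE)

def has_mobile_doctype(html):
    return bool(MOBILE_DTD.search(html))
```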

If you have any questions about mobile sites, post them in our Webmaster Help Forum, where webmasters around the world, as well as we ourselves, are happy to help you with your problem.

Posted by Jun Mukai, Software Engineer, Mobile Search Team

[Gd] Dev Channel Update


Google Chrome Releases: Dev Channel Update

The Dev channel has been updated for all platforms (Windows, Mac, and Linux). In this release we are continuing our focus on feature polish, stability improvements, and extensions work.

All Platforms
  • [r31412] Kiosk Mode implementation. (Issue: 23145 – doesn't work on OS X yet)
  • Extension-related updates below apply to all platforms.
  • [r31428], [r31429], [r31432], [r31462] Download shelf polish.
  • [r31156] Download shelf animates out (Issue: 25602)
  • [r31200] Bookmark All Tabs... works (Issue: 25099)
  • [r31227] Engooden SSL code – saving on google docs should no longer hang forever (Issue: 21268)
  • [r31369, r31197] More keyboard handling fixes (Issue: 25856, 26115)
  • [r31287] Send keypress() events for more keys – ctrl-1 now works in docs (Issue: 25249)
  • [r31316] New Tab button has "pressed" and "hover" states (Issue: 26205)
  • [r31330] "Paste and Match Style" now has shortcut cmd-opt-shift-v instead of cmd-shift-v (Issue: 25205)
  • [r31297] Mouse tracking now works correctly on full-window plugins (Issue: 25288)
  • [r31561] Transparent plugins now draw correctly (Issue: 25820)
  • [r31585] PDFs are downloaded again instead of being opened by the QuickTime plugin (Issue: 26075)
  • [r31157] Don't reload extensions management page to refresh after install/uninstall/disable (Issue: 26163)
  • [r31179] User scripts not installed depending on download settings (Issue: 26801)
  • [r31204] Implement alert(), prompt(), and confirm() for extensions (Issue: 12126)
  • [r31285] Installed extensions should not have the 'reload' option (Issue: 26901)
  • [r31335] Make inspector for background page stay open across reloads (Issue: 25287)
  • [r31365] Add an info bubble after extension installation (Issue: 21412)
  • [r31540] Added a confirmation on extension uninstallation. (Issue: 27162)

Known Issues:
  • Crash when closing a browser window when page action extensions are installed on Linux. (Issue: 25558)
  • Crash when clicking on the "Reload" button on the extension crashed info bar (Issue: 27199).
  • WebDatabase has been temporarily disabled.

More details about additional changes are available in the svn log of all revisions.

You can find out about getting on the Dev channel here:

If you find new issues, please let us know by filing a bug at

Anthony Laforge

Google Chrome Program Manager

Thursday, November 12, 2009

[Gd] Stable Update: Fix Google Chrome not Starting


Google Chrome Releases: Stable Update: Fix Google Chrome not Starting

Google Chrome's Stable channel has been updated to fix a potential issue that could cause Google Chrome to stop working, as well as a security issue.

This release removes a dependency on a Windows library (t2embed.dll) that is not required by Google Chrome. If that library is missing or the user does not have permission to read it, earlier versions of Google Chrome would fail silently.

Security Fix:
CVE-2009-2816 Custom headers incorrectly sent for CORS OPTIONS request

A malicious web site operator could set custom HTTP headers on cross-origin OPTIONS requests.

Severity: Low. The majority of users are unlikely to be impacted by this issue.
Credit: Apple Security
  • A victim would need to visit a page under an attacker's control.
  • The OPTIONS method is not widely supported by servers.
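For context, here is a sketch (in Python, with a function name of our own invention) of what a well-behaved client does instead: a CORS preflight OPTIONS request announces custom header names in Access-Control-Request-Headers, but must not send the custom headers themselves until the server approves them.

```python
# Headers that CORS treats as "simple" and that need no preflight approval.
SIMPLE_HEADERS = {"accept", "accept-language", "content-language", "content-type"}

def build_preflight(origin, method, custom_header_names):
    # Announce the names of non-simple headers; never include their values.
    request_names = sorted(h.lower() for h in custom_header_names
                           if h.lower() not in SIMPLE_HEADERS)
    preflight = {"Origin": origin,
                 "Access-Control-Request-Method": method}
    if request_names:
        preflight["Access-Control-Request-Headers"] = ",".join(request_names)
    return preflight
```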

Mark Larson, Google Chrome Team


[Gd] A 2x Faster Web


Chromium Blog: A 2x Faster Web

Today we'd like to share with the web community information about SPDY, pronounced "SPeeDY", an early-stage research project that is part of our effort to make the web faster. SPDY is at its core an application-layer protocol for transporting content over the web. It is designed specifically for minimizing latency through features such as multiplexed streams, request prioritization and HTTP header compression.
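The header-compression idea pays off because HTTP headers repeat almost verbatim from one request to the next. A toy illustration with zlib (this is not actual SPDY framing; the request text is invented):

```python
import zlib

# Two consecutive requests whose headers differ only in the path.
req1 = (b"GET /index.html HTTP/1.1\r\n"
        b"Host: www.example.com\r\n"
        b"User-Agent: Mozilla/5.0 (Windows NT 5.1) Chrome/4.0\r\n"
        b"Accept: text/html,application/xhtml+xml\r\n"
        b"Accept-Encoding: gzip,deflate\r\n"
        b"Cookie: session=abc123; prefs=compact\r\n\r\n")
req2 = req1.replace(b"/index.html", b"/style.css")

raw = req1 + req2
compressed = zlib.compress(raw)
print(len(raw), "->", len(compressed), "bytes")
```

The second request's headers compress almost for free, because they are nearly identical to the first's.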

We started working on SPDY while exploring ways to optimize the way browsers and servers communicate. Today, web clients and servers speak HTTP. HTTP is an elegantly simple protocol that emerged as a web standard in 1996 after a series of experiments. HTTP has served the web incredibly well. We want to continue building on the web's tradition of experimentation and optimization, to further support the evolution of websites and browsers. So over the last few months, a few of us here at Google have been experimenting with new ways for web browsers and servers to speak to each other, resulting in a prototype web server and Google Chrome client with SPDY support.

So far we have only tested SPDY in lab conditions. The initial results are very encouraging: when we download the top 25 websites over simulated home network connections, we see a significant improvement in performance - pages loaded up to 55% faster. There is still a lot of work we need to do to evaluate the performance of SPDY in real-world conditions. However, we believe that we have reached the stage where our small team could benefit from the active participation, feedback and assistance of the web community.

For those of you who would like to learn more and hopefully contribute to our experiment, we invite you to review our early stage documentation, look at our current code and provide feedback through the Chromium Google Group.

Posted by Mike Belshe, Software Engineer and Roberto Peon, Software Engineer
This post is cross-posted at the Google Research Blog

Wednesday, November 11, 2009

[Gd] [Libraries][Update] Chrome Frame 1.0.1


Google AJAX API Alerts: [Libraries][Update] Chrome Frame 1.0.1

Chrome Frame was updated to version 1.0.1

[Gd] Post-Halloween Treat: New Keywords User Interface!


Official Google Webmaster Central Blog: Post-Halloween Treat: New Keywords User Interface!

Our team had an awesome Halloween and we hope you did too. Yes, the picture below is our team; we take our Halloween costumes pretty seriously. :)

As a post-Halloween treat, we're happy to announce a brand new user interface for our Keywords feature. We'll now be updating the data daily, providing details on how often we found a specific keyword, and displaying a handful of URLs that contain a specific keyword. The significance column compares the frequency of a keyword to the frequency of the most popular keyword on your site. When you click on a keyword to view more details, you will get a list of up to 10 URLs which contain that keyword.
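The significance comparison described above amounts to a simple ratio (Google has not published the exact formula; this sketch just implements the stated comparison):

```python
# Each keyword's count relative to the count of the most
# frequent keyword on the site; the top keyword scores 1.0.
def keyword_significance(counts):
    top = max(counts.values())
    return {word: count / top for word, count in counts.items()}
```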

This will be really useful when you re-implement your site on a new technology framework, or need to identify which URLs may have been hacked. For example, if you start noticing your site appearing in search results for terms totally unrelated to your website (for example, "Viagra" or "casino"), you can use this feature to find those keywords and identify the pages that contain them. This will enable you to eliminate any hacked content quickly.

Posted by Kurt Dresner, Tanya Gupta, and Sagar Kamdar, Webmaster Tools team

[Gd] Integrating Application with Intents


Android Developers Blog: Integrating Application with Intents

Written in collaboration with Michael Burton; Ivan Mitrovic, uLocate; and Josh Garnier, OpenTable.

OpenTable, uLocate, and worked together to create a great user experience on Android. We saw an opportunity to enable WHERE and GoodFood users to make reservations on OpenTable easily and seamlessly. This is a situation where everyone wins — OpenTable gets more traffic, WHERE and GoodFood gain functionality to make their applications stickier, and users benefit because they can make reservations with only a few taps of a finger. We were able to achieve this deep integration between our applications by using Android's Intent mechanism. Intents are perhaps one of Android's coolest, most unique, and under-appreciated features. Here's how we exploited them to compose a new user experience from parts each of us already had.


One of the first steps is to design your Intent interface, or API. The main public Intent that OpenTable exposes is the RESERVE Intent, which lets you make a reservation at a specific restaurant and optionally specify the date, time, and party size.

Here's an example of how to make a reservation using the RESERVE Intent:

startActivity(new Intent("com.opentable.action.RESERVE",
        Uri.parse("reserve://opentable.com/2947?partySize=3"))); // illustrative restaurant ID and party size
Our objective was to make it simple and clear to the developer using the Intent. So how did we decide what it would look like?

First, we needed an Action. We considered using Intent.ACTION_VIEW, but decided this didn't map well to making a reservation, so we made up a new action. Following the conventions of the Android platform (roughly <package-name>.action.<action-name>), we chose "com.opentable.action.RESERVE". Actions really are just strings, so it's important to namespace them. Not all applications will need to define their own actions. In fact, common actions such as Intent.ACTION_VIEW (aka "android.intent.action.VIEW") are often a better choice if you're not doing something unusual.

Next we needed to determine how data would be sent in our Intent. We decided to have the data encoded in a URI, although you might choose to receive your data as a collection of items in the Intent's data Bundle. We used a scheme of "reserve:" to be consistent with our action. We then put our domain authority and the restaurant ID into the URI path since it was required, and we shunted off all of the other, optional inputs to URI query parameters.


Once we knew what we wanted the Intent to look like, we needed to register the Intent with the system so Android would know to start up the OpenTable application. This is done by inserting an Intent filter into the appropriate Activity declaration in AndroidManifest.xml:

<activity android:name=".activity.Splash" ... >
  <intent-filter>
    <action android:name="com.opentable.action.RESERVE"/>
    <category android:name="android.intent.category.DEFAULT" />
    <data android:scheme="reserve" android:host=""/>
  </intent-filter>
</activity>

In our case, we wanted users to see a brief OpenTable splash screen as we loaded up details about their restaurant selection, so we put the Intent Filter in the splash Activity definition. We set our category to be DEFAULT. This will ensure our application is launched without asking the user what application to use, as long as no other Activities also list themselves as default for this action.

Notice that things like the URI query parameter ("partySize" in our example) are not specified by the Intent filter. This is why documentation is key when defining your Intents, which we'll talk about a bit later.


Now the only thing left to do was write the code to handle the intent.

protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    final Uri uri;
    final int restaurantId;
    try {
        uri = getIntent().getData();
        restaurantId = Integer.parseInt(uri.getPathSegments().get(0));
    } catch (Exception e) {
        // Restaurant ID is required; send the user to manual search instead
        startActivity(FindTable.start(FindTablePublic.this));
        finish();
        return;
    }
    final String partySize = uri.getQueryParameter("partySize"); // may be null
    ...
}

Although this is not quite all the code, you get the idea. The hardest part here was the error handling. OpenTable wanted to be able to gracefully handle erroneous Intents that might be sent by partner applications, so if we have any problem parsing the restaurant ID, we pass the user off to another Activity where they can find the restaurant manually. It's important to verify the input just as you would in a desktop or web application to protect against injection attacks that might harm your app or your users.

Calling and Handling Uncertainty with Grace

Actually invoking the target application from within the requester is quite straightforward, but there are a few cases we need to handle. What if OpenTable isn't installed? What if WHERE or GoodFood doesn't know the restaurant ID?

                               Restaurant ID known     Restaurant ID unknown
User has OpenTable             Call OpenTable Intent   Don't show reserve button
User doesn't have OpenTable    Call Market Intent      Don't show reserve button

You'll probably wish to work with your partner to decide exactly what to do if the user doesn't have the target application installed. In this case, we decided we would take the user to Android Market to download OpenTable if s/he wished to do so.

public void showReserveButton() {

    // set up the Intent to call OpenTable
    // (the restaurant URI follows the format shown earlier)
    Uri reserveUri = Uri.parse(String.format("reserve://opentable.com/%d", opentableId));
    final Intent opentableIntent = new Intent("com.opentable.action.RESERVE", reserveUri);

    // set up the Intent to deep-link into Android Market
    Uri marketUri = Uri.parse("market://search?q=pname:com.opentable");
    final Intent marketIntent = new Intent(Intent.ACTION_VIEW).setData(marketUri);

    opentableButton.setVisibility(opentableId > 0 ? View.VISIBLE : View.GONE);
    opentableButton.setOnClickListener(new Button.OnClickListener() {
        public void onClick(View v) {
            // fall back to Android Market if nothing can handle the OpenTable Intent
            PackageManager pm = getPackageManager();
            startActivity(pm.queryIntentActivities(opentableIntent, 0).size() == 0 ?
                    marketIntent : opentableIntent);
        }
    });
}

In the case where the ID for the restaurant is unavailable, whether because they don't take reservations or they aren't part of the OpenTable network, we simply hide the reserve button.

Publishing the Intent Specification

Now that all the technical work is done, how can you get other developers to use your Intent-based API besides 1:1 outreach? The answer is simple: publish documentation on your website. This makes it more likely that other applications will link to your functionality and also makes your application available to a wider community than you might otherwise reach.

If there's an application that you'd like to tap into that doesn't have any published information, try contacting the developer. It's often in their best interest to encourage third parties to use their APIs, and if they already have an API sitting around, it might be simple to get you the documentation for it.


It's really just this simple. Now when any of us is in a new city or just around the neighborhood, it's easy to check which place is the new hot spot and immediately grab an available table. It's great to not need to find a restaurant in one application, launch OpenTable to see if there's a table, find out there isn't, launch the first application again, and on and on. We hope you'll find this write-up useful as you develop your own public intents and that you'll consider sharing them with the greater Android community.


Tuesday, November 10, 2009

[Gd] Community Update: deferred, open source, and more


Google App Engine Blog: Community Update: deferred, open source, and more

Here are some of the recent developments from the greater developer community.


Nick Johnson recently added a new module to the App Engine Python SDK which allows you to use the task queue service to execute deferred function calls. This library requires minimal configuration and makes it even easier to use tasks. Using it is as simple as calling deferred.defer with the function and arguments you want to call - for example:

from google.appengine.ext import deferred
import logging
deferred.defer(logging.info, "In a deferred task")  # any callable can be deferred

For more details, see the article.

Ruby as a runtime option

The JRuby App Engine project has done a lot of work making spin-up time less painful with the most recent release. They've also had some great contributions from the growing community, such as an Image API that is ImageScience compatible. Ruby programmers can now build applications using familiar tools like Sinatra and DataMapper, while having access to the full set of App Engine APIs for Java. Follow future developments on the jruby-appengine blog and watch for talks by the JRuby App Engine team at upcoming conferences: "Scaling on App Engine with Ruby and Duby" at RubyConf and "JRuby on Google App Engine" at JRubyConf.

An open source alternate environment: TyphoonAE

Open source efforts to provide alternate hosting environments for App Engine applications continue to expand, with the TyphoonAE project joining the existing AppScale project in providing a platform to run App Engine apps. TyphoonAE aims to provide a complete environment for hosting App Engine apps, and so far includes compatible datastore, memcache, task queue and XMPP implementations.


Juraj recently created and open sourced a library for the Java runtime which "generate[s] appropriate paging queries from a base query, and then us[es] these queries to page through the data." This is intended to simplify iteration through all of the results in a very large result set.
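The key-based paging idea behind such libraries can be sketched in pure Python (the library itself is Java, and its actual API is not shown here; `fetch_page` stands in for issuing a keyset query against the datastore):

```python
# Instead of an OFFSET, each page query asks for keys greater than the
# last key seen, which stays efficient no matter how deep you page.
def fetch_page(rows, page_size, after_key=None):
    # rows: list of (key, value) pairs sorted by key, standing in for a query
    page = [r for r in rows if after_key is None or r[0] > after_key][:page_size]
    cursor = page[-1][0] if page else None
    return page, cursor

def iterate_all(rows, page_size):
    cursor, out = None, []
    while True:
        page, cursor = fetch_page(rows, page_size, cursor)
        if not page:
            break
        out.extend(page)
    return out
```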

AEJ Tools

If you love RESTful APIs then it would be worth your while to take a look at AEJ Tools. Their library provides "a server module which provides a Rest style access to your datastore data, and a client module which allows you to run queries remotely using the Groovy interactive console. It can be used to run queries, store new entities or as a data import/export tool."

An open source full text search library

The datastore related libraries just keep right on coming. Full text search is a highly requested feature for App Engine, and those behind the open source appengine-search project have created a Python library which "[c]an defer indexing via Task Queue API. Uses Relation index strategy mentioned in Brett Slatkin's Google I/O talk."
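At its core, the relation-index strategy keeps the keyword index in a separate structure from the documents themselves, so reading a document never loads its (potentially large) index, and index writes can be deferred to a background task. A toy, in-memory version of that idea (not the library's actual code):

```python
class KeywordIndex:
    def __init__(self):
        self.postings = {}  # keyword -> set of document ids

    def index_document(self, doc_id, text):
        # In the real strategy this runs in a deferred task and writes
        # a separate index entity keyed to the document.
        for word in set(text.lower().split()):
            self.postings.setdefault(word, set()).add(doc_id)

    def search(self, word):
        return self.postings.get(word.lower(), set())
```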

Testing framework

Moving beyond the datastore, the developers at geewax have created an advanced testing framework called GAE Testbed. It covers everything from isolated and quick unit tests to full scale end to end tests and builds on some of the great testing tools for App Engine which are already available.

Party Chat

In other news, the folks at techwalla have migrated their party chat app to App Engine and made the source code available. It uses the new XMPP API to simulate a chat room using your favorite XMPP client. You can read more specifics on their blog.

Other open source projects

Check out the open source projects wiki page for more open source App Engine applications and utilities. Since the last posting, nearly 15 new applications have been added, so keep checking in as new projects are added regularly. If you have a project of your own to add, just follow the instructions at the bottom of the same page.

That brings our community update to a close. We're always interested in hearing about new and interesting projects which use App Engine. If you have something to share, drop us a line.

Posted by Jeff Scudder, App Engine Team


[Gd] AdWords Downtime: November 14th, 10am-2pm PDT


AdWords API Blog: AdWords Downtime: November 14th, 10am-2pm PDT

We'll be performing routine system maintenance on Saturday, November 14th from approximately 10:00am to 2:00pm PDT. You won't be able to access AdWords or the API during this time frame, but your ads will continue to run as normal.

-Eric Koleda, AdWords API Team

[Gd] Go: A New Programming Language


Google Code Blog: Go: A New Programming Language

Have you heard about Go? We released a new, experimental systems programming language today. It is open source and we're excited about sharing it with the development community. For more information, check out the Google Open Source blog.

By Robert Griesemer, Rob Pike, Ken Thompson, Ian Taylor, Russ Cox, Jini Kim and Adam Langley - The Go Team

[Gd] New YouTube API Version Available for Testing


YouTube API Blog: New YouTube API Version Available for Testing

While we don't normally call out new releases of the Google Data YouTube API on this blog, we wanted to draw specific attention to the version that has just been pushed out to our staging servers. There are two specific changes that we'd like to give our developers and partners a chance to test before they go live. Both changes affect important areas of the API: ClientLogin authentication, and playback URLs in media:content entries.

We fully intend for the changes to be backwards compatible, and from the developer's perspective you should not have to change any code. But testing your code is always a best practice, so if you rely on ClientLogin or retrieving media playback URLs from the Google Data YouTube API, please repoint your code to and confirm functionality.

Any incompatibilities should be reported as soon as possible in our YouTube API Developer Forum. We expect to move the changes from the staging environment into production on November 17.

Monday, November 9, 2009

[Gd] Use compression to make the web faster


Google Code Blog: Use compression to make the web faster

Every day, more than 99 human years are wasted because of uncompressed content.  Although support for compression is a standard feature of all modern browsers, there are still many cases in which users of these browsers do not receive compressed content.  This wastes bandwidth and slows down users' interactions with web pages.

Uncompressed content hurts all users. For bandwidth-constrained users, it takes longer just to transfer the additional bits. For broadband connections, even though the bits are transferred quickly, it takes several round trips between client and server before the two can communicate at the highest possible speed.  For these users the number of round trips is the larger factor in determining the time required to load a web page. Even for well-connected users these round trips often take tens of milliseconds and sometimes well over one hundred milliseconds.

In Steve Souders' book Even Faster Web Sites, Tony Gentilcore presents data showing the page load time increase with compression disabled. We've reproduced the results for the three highest-ranked sites from the Alexa top 100 with permission here:

Data, with permission, from Steve Souders, "Chapter 9: Going Beyond Gzipping," in Even Faster Web Sites (Sebastopol, CA: O'Reilly, 2009), 122.

The data from Google's web search logs show that the average page load time for users getting uncompressed content is 2.0 seconds, whereas the time for users getting compressed content is 1.6 seconds.  In a randomized experiment where we forced compression for some users who would otherwise not get compressed content, we measured a latency improvement of 300ms.  While this experiment did not capture the full 0.4 second difference, that is probably because users getting forced compression have older computers and older software.

We have found that there are 4 major reasons why users do not get compressed content: anti-virus software, browser bugs, web proxies, and misconfigured web servers.  The first three modify the web request so that the web server does not know that the browser can uncompress content. Specifically, they remove or mangle the Accept-Encoding header that is normally sent with every request.
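The server-side consequence is simple: a server can only send gzip when the request's Accept-Encoding header survives the trip intact. A sketch of that decision (function name and the mangled header spelling below are invented examples):

```python
# If a proxy or anti-virus product removes or renames Accept-Encoding,
# this check fails and the server falls back to uncompressed content.
def client_accepts_gzip(headers):
    accept = headers.get("Accept-Encoding", "")
    return "gzip" in accept.lower()
```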

Anti-virus software may try to minimize CPU operations by intercepting and altering requests so that web servers send back uncompressed content.  But if the CPU is not the bottleneck, the software is not doing users any favors.  Some popular antivirus programs interfere with compression.  Users can check if their anti-virus software is interfering with compression by visiting the browser compression test page at

Internet Explorer 6 removes the Accept-Encoding request header when it sends requests via a proxy that only supports  HTTP 1.0 requests.  The table below, generated from Google's web search logs, shows that IE 6 represents 36% of all search results that are sent without compression.  This number is far higher than the percentage of people using IE 6.

Data from Google Web Search Logs

There are a handful of ISPs where the percentage of uncompressed content is over 95%. One likely hypothesis is that either an ISP or a corporate proxy removes or mangles the Accept-Encoding header. As with anti-virus software, a user who suspects an ISP is interfering with compression should visit the browser compression test page at

Finally, in many cases, users are not getting compressed content because the websites they visit are not compressing their content.  The following table shows a few popular websites that do not compress all of their content. If these websites were to compress their content, they could decrease the page load times by hundreds of milliseconds for the average user, and even more for users on modem connections.
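The scale of the potential savings is easy to demonstrate with the gzip module (the repetitive HTML below is invented; real pages vary, but markup-heavy content routinely compresses to a fraction of its size):

```python
import gzip

# Markup is highly repetitive, so it compresses extremely well.
html = ("<div class='row'><span class='cell'>item</span></div>\n" * 200).encode()
compressed = gzip.compress(html)
print(len(html), "->", len(compressed), "bytes")
```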

Data generated using Page Speed

To reduce uncompressed content, we all need to work together.
  • Corporate IT departments and individual users can upgrade their browsers, especially if they are using IE 6 with a proxy. Using the latest version of Firefox, Internet Explorer, Opera, Safari, or Google Chrome will increase the chances of getting compressed content.  A recent editorial in IEEE Spectrum lists additional reasons, besides compression, for upgrading from IE6.
  • Anti-virus software vendors can start handling compression properly and stop removing or mangling the Accept-Encoding header in upcoming releases of their software.
  • ISPs that use an HTTP proxy which strips or mangles the Accept-Encoding header can upgrade, reconfigure or install a better proxy which doesn't prevent their users from getting compressed content.
  • Webmasters can use Page Speed (or other similar tools) to check that the content of their pages is compressed.
For more articles on speeding up the web, check out

By Arvind Jain, Engineering Director and Jason Glasgow, Staff Software Engineer