Friday, April 2, 2010

[Gd] [Language][Update] 6 New Transliteration Languages


Google AJAX API Alerts: [Language][Update] 6 New Transliteration Languages

6 new languages have been added to the Transliteration API:
  • Amharic
  • Greek
  • Russian
  • Sanskrit
  • Serbian
  • Tigrinya

[Gd] [Loader][Release] Fixed loader caching bug in IE


Google AJAX API Alerts: [Loader][Release] Fixed loader caching bug in IE

Fixed loader caching bug in IE.

[Gd] A word on site clinics


Official Google Webmaster Central Blog: A word on site clinics

Webmaster Level: All

We try to communicate with webmasters in lots of different places. For example, when we send representatives to conferences we’re happy to participate in public site clinics where we share best practices on how to improve the crawlability and site architecture of websites suggested by the audience.

However, we also want to help users who can’t or don’t want to attend search conferences. To reach more people, we started doing free virtual site clinics in languages other than English. These site clinics help site owners make websites in such a way that they are more easily crawled, indexed, and returned by search engine crawlers, which in turn helps webmasters gain more visibility on the web.

We did a series of free virtual site clinics in Spanish last year which spanned 5 blog posts. The clinics covered real problems on real sites, and we posted the results on the Spanish Webmaster Central blog. If you read Spanish, I recommend you go read the different posts covering everything from issues with framed sites, to more technical domain setup.

In some countries we don’t have dedicated webmaster-focused blogs, but we still want to help webmasters in those countries. That means that you might occasionally see site clinic or webmaster-related posts on AdWords blogs such as the forthcoming ones on the Nordic AdWords blogs (which cover Danish, Finnish, Norwegian and Swedish). Recently when we posted some advice for webmasters on one of our AdWords blogs, we received questions about the relationship between Google’s search and advertising programs. We wanted to again reassure our users that the ranking of Google’s organic search results is entirely separate from our advertising programs. Furthermore, we do not give any preference to AdWords customers in our site clinics - everybody is welcome to participate. We’re simply posting this on local “AdWords” blogs because it’s the best way for us to reach webmasters in those communities and languages.

[Gd] Extending Google Calendar to support cross-organizational scheduling


Google Apps Developer Blog: Extending Google Calendar to support cross-organizational scheduling

Editor's Note: This post was written by Gilad Goraly from ScheduleOnce, a company that provides meeting scheduling solutions for Google Calendar. We invited ScheduleOnce to share their experiences building an application on top of Google Apps utilizing some of our APIs.

ScheduleOnce provides meeting scheduling solutions for organizations using Google Apps. Our solutions extend Google Calendar to support cross-organizational scheduling and include private labeling and robust scheduling features. Since our solution is not a standalone application but an add-on to a calendaring and messaging platform, we were looking to integrate it with a platform that is open and inviting for third party vendors. Google Apps was the natural choice.

Google Calendar does a great job when it comes to scheduling meetings with members inside the organization. The free/busy view allows you to see attendee availability and select a time that is good for all. But what do you do when the meeting involves one or more external attendees? With ScheduleOnce it is possible to see the availability of all attendees, both internal and external, across domains, in one simple free/busy interface. The meeting organizer then selects a time and schedules the meeting. It is that simple. Now let’s see what’s behind the scenes and why we chose to develop it for Google Apps.
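The heart of a cross-domain free/busy view like this is simple interval arithmetic: merge every attendee's busy blocks and offer the gaps. A minimal sketch in Python (the function name and the plain-number interval representation are our own illustration, not ScheduleOnce's actual implementation):

```python
def common_free_slots(busy_by_attendee, window_start, window_end):
    """Merge everyone's busy intervals and return the gaps, i.e. the
    times within the window when all attendees are free.
    Intervals are (start, end) pairs of comparable numbers."""
    busy = sorted(iv for ivs in busy_by_attendee for iv in ivs)
    free, cursor = [], window_start
    for start, end in busy:
        if start > cursor:
            # A gap before this busy block is free for everyone
            free.append((cursor, min(start, window_end)))
        cursor = max(cursor, end)
    if cursor < window_end:
        free.append((cursor, window_end))
    return [(s, e) for s, e in free if s < e]
```

Given, say, an organizer busy 9-10 and 12-13 and a guest busy 10:30-11, the common free slots in a 9-13 window are 10-10:30 and 11-12.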

For the solution to work effectively it should be completely integrated into the user’s messaging and calendaring environment. This is why we chose to work with Google. Google Apps is an open messaging and calendaring platform with a convenient integration framework. We have the following integration points with Google Apps:

  1. The Google Apps Marketplace installation process adds a link in Google’s universal navigation and enables Single Sign On (SSO).

  2. The Google Gadget framework is used to include a gadget in Google Calendar and in Gmail.

  3. The Google Calendar API is used to seamlessly integrate with the user’s calendar.

Now let's look at each integration point and some of the APIs we used to make ScheduleOnce work seamlessly with Google Apps.

Installation from the Google Apps Marketplace

The new Google Apps Marketplace installation process provides two major benefits:
  1. A quick and easy installation process and a link in Google’s universal navigation

  2. Single Sign On (SSO)

Installation process

The Google Apps administrator follows a simple installation wizard. During this process, ScheduleOnce does the following:
  1. The Administrator approves a Two Legged OAuth (2LO) calendar API permission for all domain users. This means that ScheduleOnce can access the calendar API of every user in the domain without requiring the user to re-enter their credentials. This is very convenient for the user.

  2. Every user in the domain automatically gets a link to a Personal ScheduleOnce Dashboard on his Google universal navigation. This is set in the installation manifest:
    <Extension id="oneBarLink" type="link">
  3. The administrator selects a URL and a logo for the application. We use SSO and the ${DOMAIN_NAME} parameter to verify that the user is part of the domain, so that no one can maliciously create ScheduleOnce instances. We also use the ${DOMAIN_NAME} parameter when presenting the domain's custom logo: ${DOMAIN_NAME}/images/logo.gif?alpha=1

Single Sign On (SSO)

During the installation process our URL is "white listed" at Google so we can authenticate the user by their Open ID. Behind the scenes, ScheduleOnce handles the authentication for the user as described here. In addition, the administrator grants permission to access the user calendar API for all the users in the domain. Since authentication for API access is done using 2-Legged-OAuth (2LO) we can directly connect to the user’s calendar.

On the server side there is a service that handles the 2LO requests:

// Create a service for making 2LO requests
GOAuthRequestFactory requestFactory = new GOAuthRequestFactory("cl", "yourCompany-YourAppName-v1");
requestFactory.ConsumerKey = <CONSUMER_KEY>;
requestFactory.ConsumerSecret = <CONSUMER_SECRET>;
CalendarService service = new CalendarService(requestFactory.ApplicationName);
service.RequestFactory = requestFactory;

SSO and 2LO provide the needed security with the convenience of a transparent login. With this combination the ScheduleOnce application looks like any other Google Apps application. When trying to access the ScheduleOnce application before authenticating with Google Apps, the user will get the normal Google Apps login (so no one outside the domain can log in). When entering the ScheduleOnce application from one of the Google Apps entry points (Google's universal navigation or one of the gadgets), the login is done behind the scenes and is transparent to the user.
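Under the hood, a 2LO request is just an OAuth 1.0 HMAC-SHA1 signature computed with the consumer secret and an empty token secret, plus an xoauth_requestor_id parameter naming the domain user whose data is being accessed. A rough stdlib-only sketch of the signing step (the function name and parameter values are illustrative, not from ScheduleOnce's code):

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign_2lo_request(consumer_key, consumer_secret, url, requestor_id,
                     nonce, timestamp):
    """Build OAuth 1.0 parameters for a 2-legged GET request (no token)."""
    params = {
        "oauth_consumer_key": consumer_key,
        "oauth_nonce": nonce,
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": timestamp,
        "oauth_version": "1.0",
        # 2LO: name the domain user whose calendar we are acting on
        "xoauth_requestor_id": requestor_id,
    }
    # Signature base string: METHOD & encoded-url & encoded-sorted-params
    encoded = "&".join("%s=%s" % (quote(k, safe=""), quote(v, safe=""))
                       for k, v in sorted(params.items()))
    base = "&".join(["GET", quote(url, safe=""), quote(encoded, safe="")])
    # The signing key is consumer_secret + "&" because there is no token secret
    key = ("%s&" % quote(consumer_secret, safe="")).encode()
    digest = hmac.new(key, base.encode(), hashlib.sha1).digest()
    params["oauth_signature"] = base64.b64encode(digest).decode()
    return params
```

Because the administrator granted domain-wide access at install time, no per-user credential prompt is ever needed: the server just signs each Calendar API call this way.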

Gadgets in Google Calendar, Gmail and the Start Page

We used an OpenSocial gadget for providing meeting management functionality. Authentication here is a bit trickier since it should be done without the user doing anything actively. To identify the user we used SSO and Open ID, allowing the gadget to be installed on any Google Apps gadget container (such as Google Calendar or Gmail).

Gadget initialization includes UI and OpenSocial data request:
function initGadget() {
  // Initializing the gadget tabs
  tabs.addTab("Pending", {
    contentContainer: document.getElementById("pending_id"),
    callback: callback
  });
  tabs.addTab("Scheduled", {
    contentContainer: document.getElementById("scheduled_id"),
    callback: callback
  });

  // Initializing the OpenSocial data request
  var idspec = opensocial.newIdSpec({"userId" : "OWNER"});
  var req = opensocial.newDataRequest();
  req.add(req.newFetchPersonRequest(opensocial.IdSpec.PersonId.OWNER), "get_owner");
  req.send(responseHandler);
}
where the POST request looks something like this:

function makePOSTRequest() {
  var url = baseURL + "GadgetHandler.aspx";
  var postdata = {opensocialid : openSocialID};
  var params = {};
  postdata = gadgets.io.encodeValues(postdata);
  params[gadgets.io.RequestParameters.METHOD] = gadgets.io.MethodType.POST;
  params[gadgets.io.RequestParameters.POST_DATA] = postdata;
  gadgets.io.makeRequest(url, responseHandler, params);
}
When the gadget runs it loads a hidden iframe to check if the user (by his Open ID) is authenticated on the server. If the user is not authenticated, authentication is done behind the scenes and the user’s Open ID is added to a temporary application object. If the user’s Open ID mapping exists on the application object it means the user is authenticated and can work freely in the ScheduleOnce application.

Integration with Google Calendar

Google Calendar is accessed using the Google Calendar API. Three major functions are used:
  1. Getting the user's busy times: Since availability is dynamic and dependent on the current calendar status we update it every time it is needed. This ensures there will never be any double booking in the user’s calendar.
    // Retrieving the calendar events for a time frame
    EventQuery query = new EventQuery();
    string uriString = "http://www.google.com/calendar/feeds/" + username + "/private/full?ctz=utc";

    query.Uri = new Uri(uriString);
    query.StartTime = <time frame start>;
    query.EndTime = <time frame end>;

    EventFeed calFeed = service.Query(query);
    AtomEntryCollection entryColl = calFeed.Entries;
  2. Creating a calendar with tentative meeting times: ScheduleOnce cannot "lock" certain timeframes until the scheduling process is over. For this reason we create a tentative meetings calendar that can be used to see times that were proposed for a meeting.
    // Creating a calendar
    CalendarEntry calendar = new CalendarEntry();
    calendar.Title.Text = <calendar title>;
    calendar.Summary.Text = <calendar description>;
    calendar.TimeZone = <calendar timezone>;
    Uri postUri = new Uri("http://www.google.com/calendar/feeds/default/owncalendars/full");
    CalendarEntry createdCalendar = (CalendarEntry)service.Insert(postUri, calendar);
  3. Scheduling the meeting and sending the invitation via Google Calendar: When the meeting organizer schedules the meeting, a meeting entry is created in his calendar and meeting invitations are sent to all attendees.
    // Creating a meeting
    EventEntry entry = new EventEntry();
    entry.Title.Text = <meeting title>;
    When eventTime = new When(<meeting startTime>, <meeting endTime>);
    entry.Times.Add(eventTime);
    Uri postUri = new Uri("http://www.google.com/calendar/feeds/" + username + "/private/full?ctz=utc");
    AtomEntry insertedEntry = service.Insert(postUri, entry);
Tip: Use Calendar API calls in a batch when possible to increase performance.

Using Documentation and Help Forums

Google provides very good documentation that can be accessed here. Marketplace listing and installation are very simple and instructions can be found on the Google Marketplace site. If you don't find what you need in the documentation, the first place to look is the Google Apps Discussion Groups. We were able to find everything we needed to build and integrate ScheduleOnce with Google Apps. The APIs and documentation made the job easy.

Check out ScheduleOnce at

By Gilad Goraly, ScheduleOnce

Thursday, April 1, 2010

[Gd] Look ma, no plugin!


Google Web Toolkit Blog: Look ma, no plugin!

The new crop of HTML5 web browsers are capable of some pretty amazing things, and several of our engineers decided to take some 20% time to see how far we could push them. The result? An HTML5 port of Id's Quake II game engine!

We started with the existing Jake2 Java port of the Quake II engine, then used the Google Web Toolkit (along with WebGL, WebSockets, and a lot of refactoring) to cross-compile it into JavaScript. You can see the results in the video above -- we were honestly a bit surprised when we saw it pushing over 30 frames per second on our laptops (your mileage may vary)!

It's still early days for WebGL, so you won't be able to run it without a bleeding edge browser, but if you'd like to check out the code and give it a whirl yourself, you can find it here. Enjoy!


[Gd] Dev Channel Update


Google Chrome Releases: Dev Channel Update

The Dev channel has been updated to 5.0.366.2 for Windows and Linux, and 5.0.366.0 for Mac.

  • [r42843] Fixed a bug with incognito extensions like RSS Subscription. (Issue: 39351)
  • [r42400] Will no longer automatically offer to translate in incognito mode. (Issue: 38107)
  • [r42981] Fix file upload code to not hang the HTTP session when the file is unreadable. (Issue: 30850)
  • [r42411] Improvements to the bookmark manager (Issue: 39085)
  • [r42548] Re-enable pinned tabs; add support for mini and app tabs. (Issue: 36798)
  • [r43157] Fix selection issues with the cookie manager after deleting cookies. (Issue: 33320)
  • Some minor UI changes in the omnibox.
Known Issues
  • The location bar is undergoing renovation. Please excuse our dust.
Important Notes
WebGL support in Chrome now runs inside the security sandbox. If you have been testing WebGL, please remove the --no-sandbox flag from your Chrome options. WebGL may be enabled via the --enable-webgl command line option.

More details about additional changes are available in the svn log of all revisions.

You can find out about getting on the Dev channel here:

If you find new issues, please let us know by filing a bug at

Anthony Laforge
Google Chrome

[Gd] [Loader][Release] Turned on 60 minute private caching of the loader.


Google AJAX API Alerts: [Loader][Release] Turned on 60 minute private caching of the loader.

Turned on 60 minute private caching of the loader.

[Gd] [Search][Release] Fixed src='' bug


Google AJAX API Alerts: [Search][Release] Fixed src='' bug

Fix for src='' bug.

Wednesday, March 31, 2010

[Gd] Helping you help us help you


Google AJAX APIs Blog: Helping you help us help you

As I mentioned in a previous post, we've taken several measures to help differentiate legitimate API traffic from bad requests. To help us serve you better, I'm pleased to announce a new way for you to identify your request as harmful. Beginning today, please include the &evil=true parameter in your API requests if you're one of the bad guys.

How does this work in practice? Here's an example query which lets Google know that you're intending to use the API for nefarious purposes. This way, we can respond to your request in the appropriate manner as efficiently as possible.

Note: In order to encourage adoption as quickly as possible, we are requiring all bad requests to include the evil bit by the end of today, April 1.
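In the spirit of the joke, flagging a request takes nothing more than one extra query parameter. A hypothetical example in Python (the search endpoint path is the AJAX Search API's web search path; treat the whole thing as illustrative):

```python
from urllib.parse import urlencode

# Build a "compliant" bad request by setting the evil bit
params = {"v": "1.0", "q": "puppies", "evil": "true"}
url = "http://ajax.googleapis.com/ajax/services/search/web?" + urlencode(params)
```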

[Gd] Dev Update: Chrome Frame


Google Chrome Releases: Dev Update: Chrome Frame

The Dev channel has been updated to 5.0.366.0 for ChromeFrame.

  • Better integration with Internet Explorer’s popup blocker.
  • Fixes issues regarding switching to chrome frame with meta tag.
  • Fixes a number of crashes.
  • Fixes issues with URL referrer parsing

More details about additional changes are available in the svn log of all revisions.

You can find out about getting on the Dev channel here:

If you find new issues, please let us know by filing a bug at

Anthony Laforge, Google Chrome


[Gd] DNS Verification FTW


Official Google Webmaster Central Blog: DNS Verification FTW

Webmaster Level: Advanced

A few weeks ago, we introduced a new way of verifying site ownership, making it easy to share verified ownership of a site with another person. This week, we bring you another new way to verify. Verification by DNS record allows you to become a verified owner of an entire domain (and all of the sites within that domain) at once. It also provides an alternative way to verify for folks who struggle with the existing HTML file or meta tag methods.

I like to explain things by walking through an example, so let's try using the new verification method right now. For the sake of this example, we'll say I own the domain example.com. I have several websites under example.com, including www.example.com, blog.example.com and checkout.example.com. I could individually verify ownership of each of those sites using the meta tag or HTML file method. But that means I'd need to go through the verification process three times, and if I wanted to add another site, I'd need to do it a fourth time. DNS record verification gives me a better way!

First I'll add example.com to my account, either in Webmaster Tools or directly on the Verification Home page.

On the verification page, I select the "Add a DNS record" verification method, and follow the instructions to add the specified TXT record to my domain's DNS configuration.

When I click "Verify," Google will check for the TXT record, and if it's present, I'll be a verified owner of example.com and any associated websites and subdomains. Now I can use any of those sites in Webmaster Tools and other verification-enabled Google products without having to verify ownership of them individually.

If you try DNS record verification and it doesn't work right away, don't despair! Sometimes DNS records take a while to make their way across the Internet, so Google may not see them immediately. Make sure you've added the record exactly as it's shown on the verification page. We'll periodically check, and when we find the record we'll make you a verified owner without any further action from you.
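The check itself is simple once the TXT records are in hand: look for the exact verification token among the domain's TXT strings. A toy sketch (the record fetching is omitted, and the token format shown is illustrative):

```python
def is_verified(txt_records, token):
    """Return True if any of the domain's TXT records matches the
    verification token. DNS tools often return TXT strings wrapped in
    double quotes, so strip those before comparing."""
    return any(record.strip('"') == token for record in txt_records)

# Example: one verification record alongside an unrelated SPF record
records = ['"google-site-verification=abc123"', '"v=spf1 -all"']
```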

DNS record verification isn't for everyone—if you don't understand DNS configuration, we recommend you continue to use the HTML file and meta tag methods. But for advanced users, this is a powerful new option for verifying ownership of your sites.

As always, please visit the Webmaster Help Forum if you have any questions.

Posted by Sean Harding, Software Engineer

[Gd] How Atlassian integrated with Google Apps - Part 2


Google Apps Developer Blog: How Atlassian integrated with Google Apps - Part 2

Editor's note: This post was written by Richard Wallace of Atlassian. This is part two of a two-part series on Atlassian's recent development for the Google Apps Marketplace. Atlassian's JIRA Studio is a hosted software development suite, combining Subversion source control, issue tracking, wiki collaboration, code reviews and continuous integration. JIRA Studio is now integrated with Google Apps and is featured in the Google Apps Marketplace - a great tool for development teams using Google Apps. This series explores some of what Atlassian worked on, and what they learned along the way.

Part 2: Three hurdles to integrating Google Talk with JIRA Studio

In a previous post, I described how we came up with the idea for the JIRA Studio Activity Bar as part of our Google Apps Marketplace and JIRA Studio integration. Things were going pretty well. We were able to get data from Google Apps via the Google Apps data APIs, using OAuth for authorization. We were able to connect to the Google Talk servers, retrieve a user's buddy list, and get buddy updates. We could send and receive messages to buddies. More importantly, we were able to do it in a scalable way using Comet techniques, such as long-polling.

You didn’t think it was going to be that easy, did you?

Things were going well. Which is to say, the ground hadn't yet fallen out from under us. That is, until someone on the JIRA Studio team realized that even though we were deploying to Tomcat 6 and using Atmosphere to take advantage of asynchronous IO, we weren't really going to be saving that many resources. He explained that the standard Studio setup is to run all our application servers behind Apache. Ok, that's typical enough. What's the big deal? Well, apparently Apache can be a bit of a resource hog.

Your asynchronous IO is nice and all, but…

The Studio Apache configuration uses prefork mode. Each of those Apache processes takes up about 20MB. And since Apache uses blocking IO, all the effort to scale on the application server side might be for nothing if we tied up a bunch of Apache processes that would eat up the limited amounts of memory available. We discussed switching to using a worker threads configuration, but decided it didn't really buy us that much in savings. We suggested deploying all the applications behind a web server implemented using asynchronous IO, like Nginx, but that would have been too big a configuration change to make without adequate time to test.

Finally, we hit upon a solution. We could just have browsers connect directly to the application server that the ActivityBar webapp was running on! All the regular Studio apps could continue running at the main Studio address, and the ActivityBar webapp would run on its own subdomain. Users wouldn't care because the ActivityBar webapp is only ever accessed in the background. Ok, we could put that issue to bed.

You can't do that!

Wait, what's that, you say? Oh no, you're right! Now we're violating the same origin policy that we had been able to avoid before! Hold on a second, though. If my memory serves me correctly, I remember seeing a solution to this very problem when we were working on the new OpenSocial-based dashboard system in JIRA 4. There is an RPC system that allows the gadget - an iframe which in some circumstances is loaded from a different origin than the container - to send messages to the container. It has about 5 different implementations for all the different variations of browsers that are out there - it uses window.postMessage on all the latest browsers that support it, some funky VBScript for IE 6 & 7, a few other techniques for older Gecko and Webkit, and the fallback method using nested iframes. Could we use that? It's been developed and battle-tested by Google developers, and it would fit our needs perfectly. But how easy would it be to just drop it in?

As luck would have it, the Shindig RPC JavaScript code can be pretty easily adapted to run outside Shindig. All we needed to do was create an iframe that is loaded from the ActivityBar webapp, use that to do all of our Ajax requests and long polling with the ActivityBar server, and use the adapted Shindig RPC system to send messages between the application page and the iframe. Phew, disaster averted.

But wait, what about…

The HTTP spec, section 8.1.4, specifies that a "single-user client SHOULD NOT maintain more than 2 connections with any server or proxy." Most browsers these days tend to stretch that number to a maximum of 8 connections. IE6 and IE7 still respect that 13-year-old suggestion, so if a user opens more than two windows or tabs to their instance of Studio, each with the ActivityBar running, in IE6 and IE7 that will eat up all their connections and additional windows or tabs will appear to hang while they wait for a connection to become available. But wait, it gets better! If your Studio usage habits are anything like mine, then you typically have 6-10 tabs open with issues, issue searches, reviews, wiki pages, etc. So even on browsers where that limit is stretched, you're likely to run into the same problem.

A bit of research turned up a number of workarounds, but the one we chose was to use multiple subdomains. Since we had already done the work to solve the same origin policy issue, we didn't need to worry about that; it would "just work". The big question was whether or not we'd be able to make this change in our deployment environment on such short notice. The guys over at Contegix came through for us on this one and got the deployment environment for JIRA Studio modified to set up the DNS aliasing that we needed. Now, if you look in Firebug or other web browser debugging tools, you'll see connections being made to one subdomain in one tab and to a different subdomain in another.
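One simple way to implement that workaround is to derive the subdomain deterministically, so a given page always long-polls the same host while different pages spread their connections across several hosts. A sketch of the idea (the naming scheme and bucket count are our own illustration, not the actual Contegix setup):

```python
import hashlib

def polling_host(page_id, buckets=4):
    """Pick one of `buckets` subdomains for this page's long-poll
    connection, so per-host browser connection limits aren't exhausted
    when many tabs are open. Deterministic: the same page always maps
    to the same host."""
    digest = hashlib.md5(page_id.encode("utf-8")).hexdigest()
    return "activity%d.studio.example.com" % (int(digest, 16) % buckets)
```

With four buckets and a browser limit of two connections per host, eight tabs can long-poll simultaneously without starving regular page loads.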

Now that that is over…

Phew! We can finally step back and enjoy the fruits of our labor! Those were some major technical hurdles that we had to learn about and overcome in a short amount of time. It's hard to believe all that took place in the short span of a few months. Everyone at Atlassian and Contegix really pulled together nicely to make it all happen. And, again, special thanks to Jean-Francois from the Atmosphere project for rapidly fixing bugs that I reported!
But like most projects, just when you're ready to step back and enjoy what you've built, you're off to the races on future iterations. We installed this on our own Studio instance so we could dogfood it, and we already have plenty of ideas for improvements, including:
  • More quick add links: we've got a quick link so you can insert, into your chat, a link to the current page you're on with a single click. On the "Upcoming Events" Google Calendar tab there is a "Quick add" button. But we can also add "quick add" links for issues, wiki pages, and other convenience functions.
  • Auto IM Translation: a nifty feature Google makes available to us, and we’re excited about putting it into action.
  • Streaming application updates: right now the application feeds are fetched from the browser when they are needed and they are only fetched once to avoid sending 7-8 spurious HTTP requests. It would be much cooler if the ActivityBar webapp made these requests on behalf of the browser and, using the communications link already established, sent updates as they were found. Then you could sit on your JIRA dashboard and have continuous feedback on everything going on in your project.
  • Installable on your own servers!: one goal in all of our integration work with Studio is to make it possible for behind-the-firewall customers to take advantage of some of these features. There is a little bit of work in the ActivityBar webapp itself that still needs to be done to make this possible - we're currently hard-wired to connect to the Google Talk servers, for instance. Most of the effort to make this work in behind-the-firewall settings will need to be done in the apps themselves. We had to customize each application to get the ActivityBar to show up on every single page across every single app, something that isn't possible with the current plugin system in our applications. Something that comes close is Web Resource Contexts in Confluence, but there are a few additional bits of information required that can't be provided purely as web-resources. But stay tuned, because if we do that it will most likely be pluggable - like all Atlassian's other apps - and you'll be able to add your own application tabs!

We’re pretty excited about what we’ve built, and even more excited for developers and software development teams inside companies using Google Apps to give JIRA Studio a try. We think you’ll enjoy the work we’ve done. Check it out at

By Richard Wallace, Atlassian

[Gd] Easy Performance Profiling with Appstats


Google App Engine Blog: Easy Performance Profiling with Appstats

Since App Engine debuted 2 years ago, we’ve written extensively about best practices for writing scalable apps on App Engine. We make writing scalable apps as easy as possible, but even so, getting everything right can be a challenge - and when you don’t, it can be tough to figure out what’s going wrong, or what’s slowing your app down.

Thanks to a new library, however, you no longer need to guess. Appstats is a library for App Engine that enables you to profile your App Engine app’s performance, and see exactly what API calls your app is making for a given request, and how long they take. With the ability to get a quick overview of the performance of an entire page request, and to drill down to the details of an individual RPC call, it’s now easy to detect and eliminate redundancies and inefficiencies in the way your App Engine app works. Appstats is available for both the Python and Java runtimes.

Enabling Appstats is remarkably straightforward. In Python, it can be made to work with any Python webapp framework. For the full lowdown, see the docs, but here's the quickstart if you're using a supported framework. First, create or open a file called appengine_config.py in your app's root directory, and add the following to it:

def webapp_add_wsgi_middleware(app):
  from google.appengine.ext.appstats import recording
  app = recording.appstats_wsgi_middleware(app)
  return app
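The webapp_add_wsgi_middleware hook simply wraps your WSGI app, and the recording middleware intercepts each request the same way a hand-rolled timing wrapper would. A minimal illustration of the wrapping pattern (this is just the general shape, not the appstats internals):

```python
import time

def timing_wsgi_middleware(app):
    """Wrap a WSGI app and record how long each request takes, the same
    general pattern recording.appstats_wsgi_middleware uses above."""
    timings = []
    def wrapped(environ, start_response):
        start = time.time()
        try:
            return app(environ, start_response)
        finally:
            # Record elapsed wall-clock time even if the app raises
            timings.append(time.time() - start)
    wrapped.timings = timings
    return wrapped
```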

In Java, Appstats works by installing a servlet filter. Again, the lowdown is here, but for a quickstart, add this to your web.xml:

<filter>
  <filter-name>appstats</filter-name>
  <filter-class>com.google.appengine.tools.appstats.AppstatsFilter</filter-class>
  <init-param>
    <param-name>logMessage</param-name>
    <param-value>Appstats available: /appstats/details?time={ID}</param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>appstats</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>

This installs the Appstats event recorder, which is responsible for recording API calls made by your app so you can review them later. The recorder is pretty low overhead, so you can even use it on a live site, if you wish - though you may want to disable it once you no longer require it.

The other component of Appstats is the administrative interface. To install this on the Python runtime, you need to make a change to your app.yaml file. Add the following block inside the ‘handlers’ section of app.yaml:

- url: /stats.*
  script: $PYTHON_LIB/google/appengine/ext/appstats/ui.py

And for Java, add this to your web.xml:

<servlet>
  <servlet-name>appstats</servlet-name>
  <servlet-class>com.google.appengine.tools.appstats.AppstatsServlet</servlet-class>
</servlet>
<servlet-mapping>
  <servlet-name>appstats</servlet-name>
  <url-pattern>/stats/*</url-pattern>
</servlet-mapping>
The url here - ‘/stats’ - can be anything you like, as long as it ends with ‘/stats’, and is the URL that you can access the Appstats admin console over.

For additional ease-of-use, we can add the admin interface as a custom admin console page. In Python, add the following block to the end of app.yaml:

admin_console:
  pages:
  - name: Appstats
    url: /stats

Similarly, you can do this in Java by adding this to appengine-web.xml:

<admin-console>
  <page name="Appstats" url="/stats"/>
</admin-console>

After redeploying your app, you should now see a new option in your app’s admin console labelled ‘Appstats’. Clicking on this will show you the appstats admin console.

The main page of the admin console provides some high level information on RPC calls made and URL paths requested, and, down the bottom, a history of recent requests. Clicking the plus button on any of these will expand the entry to show more details about it, but for the really juicy details, click on an individual request in the Requests History section.

This page goes into detail on what happened in an individual request to our app. The timeline is of particular interest: Each row represents an individual RPC call that was made, with the start time, end time, and CPU time consumed all noted by means of a chart on the right. As you can see, it’s quite possible for the CPU time consumed by an RPC call to exceed the wall clock time - this typically occurs when multiple machines are involved in assembling a reply to your RPC request.

Let’s take a look at the code in question. Clicking on the datastore_v3.RunQuery RPC call to expand, we can get a complete stacktrace for the RPC call, and by clicking on a stack frame, Appstats shows us the source code in question! We see that the culprit looks something like this:

def get_questions(self):
  return models.Question.all()

This is a really frequent query - it’s executed every time anyone loads the front page - so it’s a prime candidate for the memcache API. Modifying the function to take advantage of it is straightforward:

def get_questions(self):
  questions = memcache.get("latest_questions")
  if not questions:
    questions = models.Question.all().fetch(50)
    memcache.set("latest_questions", questions, time=60)
  return questions

All we’re doing here is first, checking if the results are already available in memcache. If they’re not, we fetch them the regular way, by doing a datastore query, and store them in memcache for future reference, telling it to keep them around for 60 seconds.

60 seconds is relatively high for data that may change often, but we figure users won’t be bothered by this on our site. Much shorter timeouts - as low as just a few seconds - can still save huge amounts of resources, especially on popular sites. Few users will worry about a 5 second cache timeout, but if you’re getting 100 queries a second, you’ve just eliminated 99.8% of your query overhead!
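The arithmetic behind that claim is worth making explicit: with a steady query rate and a cache TTL, the datastore only sees one query per TTL window, no matter how many requests arrive. For example:

```python
queries_per_second = 100
cache_ttl_seconds = 5

# With the cache in place, only one datastore query happens per TTL window
datastore_qps = 1.0 / cache_ttl_seconds
fraction_saved = 1 - datastore_qps / queries_per_second
# fraction_saved is 0.998, i.e. 99.8% of the query overhead is eliminated
```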

If we repeat our request with the new code, we can take a look at the updated statistics and note the improvement on a ‘warm’ request that fetches data from memcache:

Much better! Faster by every metric: We’ve cut a chunk off the wallclock time, so our users get pages faster, and we’ve reduced CPU time and API CPU time as well! As you can imagine, even more dramatic improvements are possible for more complex applications.

Appstats can also help you with your wardrobe: participate in the Appstats contest for the best before/after screenshots of Application Stats. Post your screenshots online and link to them on Twitter copying @app_engine and using the hashtag #coolappstats, before May 2nd 2010. The coolest pair of screenshots will be used to create a Google App Engine T-shirt, and we will send that T-shirt, autographed by the App Engine team, to the winner.


Tuesday, March 30, 2010

[Gd] Stable Update: Disable Translate

| More

Google Chrome Releases: Stable Update: Disable Translate

Google Chrome has been released to the Stable channel on Windows.

This release fixes two issues:

  • Fix to prevent crashes with the LastPass extension (Issue 38857)
  • Add the option to disable 'Offer to translate pages that aren't in a language I read' in Options > Under the Hood

This release also addresses one minor security issue:

If you find issues, please let us know:

--Mark Larson, Google Chrome Team

[Gd] OAuth access to IMAP/SMTP in Gmail

| More

Google Code Blog: OAuth access to IMAP/SMTP in Gmail

Google has long believed that users should be able to export their data and use it with whichever service they choose. For years, the Gmail service has supported standard API protocols like POP and IMAP at no extra cost to our users. These efforts are consistent with our broader data liberation efforts.

In addition to making it easier for users to export their data, we also enable them to authorize third party (non-Google developed) applications and websites to access their data at Google. One of the more common examples is allowing a social network to access your address book in order to send invitations to your friends.

While it is possible for a user to authorize this access by disclosing their Google Account password to the third party app, it is more secure for the app developer to use the industry standard protocol called OAuth which enables the user to give their consent for specific access without sharing their password. Most Google APIs support this OAuth standard, and starting today it is also available for the IMAP/SMTP feature of Gmail.

The feature is available in Google Code Labs and we have provided a site with documentation and sample code. In addition, Google has begun working with other companies like Yahoo and Mozilla on a formal Internet standard for using OAuth with IMAP/SMTP (learn more at the OAuth for IMAP mailing list).

One of the first companies using this feature is Syphir, in their SmartPush application for the iPhone, as shown in the screenshots below. Unlike other push apps, Syphir’s SmartPush application never sees or stores the user’s Gmail password, thanks to this new OAuth support.

We look forward to finalizing an Internet standard for using OAuth with IMAP/SMTP, and working with IMAP/SMTP mail clients to add that support.

By Eric Sachs, Senior Product Manager

[Gd] OAuth Authentication for Google Mail IMAP and SMTP

| More

Google Apps Developer Blog: OAuth Authentication for Google Mail IMAP and SMTP

In 2007, Google Mail introduced IMAP access for all users. The only way to log in to IMAP was with a Google password. Since then, OAuth, an industry-standard authorization protocol, has been developed. Websites have used OAuth to securely access a user’s data via Google APIs (such as contacts, calendars, and docs) once access is granted by the user. Today we are announcing the ability to authenticate to Google Mail IMAP and SMTP with OAuth. To do this, we created an experimental SASL mechanism called “XOAUTH”.

The old way of logging in to Google Mail IMAP looked like this:
01 LOGIN user@gmail.com P4$$w0rd
Simple, but it required every device and third-party application to have a copy of the user’s Google password. That’s bad for security, and everything breaks when the user changes their password. OAuth support for IMAP and SMTP allows web, mobile and desktop applications to securely access a user’s e-mail, and to send e-mail on their behalf, with their permission. Users now only need to approve access to their e-mail on the traditional OAuth authorization page:

After access is approved, the app can connect via IMAP and send a request like this:

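As a rough sketch of the shape of that request (the exact signing steps are covered in the sample code; the values below are illustrative placeholders, not a working token):

```python
import base64

# Placeholder values -- a real client obtains a signed access token via the
# OAuth flow described above and signs the request URL with it.
request_url = "https://mail.google.com/mail/b/user@example.com/imap/"
oauth_params = 'oauth_consumer_key="anonymous",oauth_token="example-token"'

# The XOAUTH initial client response is the HTTP method, the request URL,
# and the OAuth parameters, space-separated and base64-encoded.
sasl_arg = base64.b64encode(f"GET {request_url} {oauth_params}".encode()).decode()
print(f"A01 AUTHENTICATE XOAUTH {sasl_arg}")
```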
OK, it’s not pretty, but we’ve got lots of sample code to help you generate the magic string you need to send us. The nice thing is that OAuth tokens are independent of user passwords, so they keep working through password changes. And you can worry a little less about the nightmare of hackers stealing passwords out of your database. Each OAuth token has a limited scope, and can be individually revoked by the user.

We’re also working on an industry standard SASL mechanism for doing OAuth, and will roll that out as soon as it’s ready. We were so excited about the benefits of XOAUTH that we couldn’t wait to get it out there for people to use.

To get started with XOAUTH, check out the Gmail site on Google Code Labs, which has documentation, a tutorial, and sample code.

Posted by Jamie Nicolson, Gmail Team

[Gd] URL removal explained, Part I: URLs & directories

| More

Official Google Webmaster Central Blog: URL removal explained, Part I: URLs & directories

Webmaster level: All

There's a lot of content on the Internet these days. At some point, something may turn up online that you would rather not have out there—anything from an inflammatory blog post you regret publishing, to confidential data that accidentally got exposed. In most cases, deleting or restricting access to this content will cause it to naturally drop out of search results after a while. However, if you urgently need to remove unwanted content that has gotten indexed by Google and you can't wait for it to naturally disappear, you can use our URL removal tool to expedite the removal of content from our search results as long as it meets certain criteria (which we'll discuss below).

We've got a series of blog posts lined up for you explaining how to successfully remove various types of content, and common mistakes to avoid. In this first post, I'm going to cover a few basic scenarios: removing a single URL, removing an entire directory or site, and reincluding removed content. I also strongly recommend our previous post on managing what information is available about you online.

Removing a single URL

In general, in order for your removal requests to be successful, the owner of the URL(s) in question—whether that's you, or someone else—must have indicated that it's okay to remove that content. For an individual URL, this can be indicated in any of three ways: a robots.txt disallow, a noindex meta tag, or a 404/410 status code. Before submitting a removal request, you can check whether the URL is correctly blocked:
  • robots.txt: You can check whether the URL is correctly disallowed using either the Fetch as Googlebot or Test robots.txt features in Webmaster Tools.
  • noindex meta tag: You can use Fetch as Googlebot to make sure the meta tag appears somewhere between the <head> and </head> tags. If you want to check a page you can't verify in Webmaster Tools, you can open the URL in a browser, go to View > Page source, and make sure you see the meta tag between the <head> and </head> tags.
  • 404 / 410 status code: You can use Fetch as Googlebot, or a tool like Live HTTP Headers, to verify whether the URL is actually returning the correct code. Sometimes "deleted" pages may say "404" or "Not found" on the page, but actually return a 200 status code in the response header; so it's good to use a proper header-checking tool to double-check.
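The kind of check those header tools perform can be sketched in a few lines (the response text here is illustrative):

```python
# "Deleted" pages sometimes display "404" in the body yet return 200 in the
# HTTP response. This helper pulls the real status code from a raw response
# header, as the header-checking tools above do.
def status_from_headers(raw_headers: str) -> int:
    # The status line looks like "HTTP/1.1 200 OK".
    status_line = raw_headers.splitlines()[0]
    return int(status_line.split()[1])

print(status_from_headers("HTTP/1.1 200 OK\r\nContent-Type: text/html"))
print(status_from_headers("HTTP/1.1 404 Not Found\r\nContent-Length: 0"))
```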
If unwanted content has been removed from a page but the page hasn't been blocked in any of the above ways, you will not be able to completely remove that URL from our search results. This is most common when you don't own the site that's hosting that content. We'll cover what to do in this situation in a subsequent post.

If a URL meets one of the above criteria, you can remove it by going to the URL removal tool, entering the URL that you want to remove, and selecting the "Webmaster has already blocked the page" option. Note that you should enter the URL where the content was hosted, not the URL of the Google search results page where it's appearing.

This article has more details about making sure you're entering the proper URL. Remember that if you don't tell us the exact URL that's troubling you, we won't be able to remove the content you had in mind.

Removing an entire directory or site

In order for a directory or site-wide removal to be successful, the directory or site must be disallowed in the site's robots.txt file. For example, in order to remove a /secret/ directory, your robots.txt file would need to include:
   User-agent: *
   Disallow: /secret/

It isn't enough for the root of the directory to return a 404 status code, because it's possible for a directory to return a 404 but still serve out files underneath it. Using robots.txt to block a directory (or an entire site) ensures that all the URLs under that directory (or site) are blocked as well. You can test whether a directory has been blocked correctly using either the Fetch as Googlebot or Test robots.txt features in Webmaster Tools.
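You can also sanity-check rules like these locally; here's a sketch using Python's standard urllib.robotparser (Webmaster Tools remains the authoritative test, and the domain is illustrative):

```python
from urllib.robotparser import RobotFileParser

# Parse the same rules shown above.
rp = RobotFileParser()
rp.parse(["User-agent: *", "Disallow: /secret/"])

# Everything under /secret/ is blocked; other paths are not.
print(rp.can_fetch("Googlebot", "http://www.example.com/secret/file.html"))
print(rp.can_fetch("Googlebot", "http://www.example.com/other.html"))
```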

Only verified owners of a site can request removal of an entire site or directory in Webmaster Tools. To request removal of a directory or site, click on the site in question, then go to Site configuration > Crawler access > Remove URL, click "New removal request", and select the "directory" or "entire site" option.

Reincluding content

You can cancel removal requests for any site you own at any time, including those submitted by other people. In order to do so, you must be a verified owner of this site in Webmaster Tools. Once you've verified ownership, you can go to Site configuration > Crawler access > Remove URL > Removed URLs (or > Made by others) and click "Cancel" next to any requests you wish to cancel.

Still have questions? Stay tuned for the rest of our series on removing content from Google's search results. If you can't wait, much has already been written about URL removals, and troubleshooting individual cases, in our Help Forum. If you still have questions after reading others' experiences, feel free to ask. Note that, in most cases, it's hard to give relevant advice about a particular removal without knowing the site or URL in question. We recommend sharing your URL by using a URL shortening service so that the URL you're concerned about doesn't get indexed as part of your post; some shortening services will even let you disable the shortcut later on, once your question has been resolved.

Posted by Susan Moskwa, Webmaster Trends Analyst

[Gd] Dev update: Integrated Adobe Flash Player Plug-in

| More

Google Chrome Releases: Dev update: Integrated Adobe Flash Player Plug-in

The Google Chrome Dev channel has been updated to 5.0.360.4 for Windows and Mac and 5.0.360.5 for Linux.

This release includes:
  • An integrated Adobe Flash Player Plug-in. We're integrating Adobe Flash Player (10.1 beta 3) with Google Chrome so that you don't have to install it or worry about keeping it up-to-date. See the blog post on the Chromium blog for more details.

    To use the bundled Flash Player plug-in, add --enable-internal-flash to your command line or shortcut for starting Google Chrome.

  • A basic plug-in manager. The about:plugins page now lets you disable any plug-in from loading on all web pages. See the Known Issues section below: this doesn't yet work in all cases if you already have the Adobe Flash Player plug-in for Firefox, Safari, or Opera installed on Windows.
Known Issues:
  • On Windows, if you have the Adobe Flash Player plug-in for Firefox, Safari, or Opera installed, the Flash plug-in will still work in some cases even if you decline the license agreement (when using --enable-internal-flash) or disable the Flash plug-in from about:plugins. We're working on it.
  • If you disable (or enable) a plugin on about:plugins, your change does not take effect until you restart Google Chrome.
  • There is no bundled Adobe Flash Player plug-in for 64-bit Linux.
--Anthony LaForge, Google Chrome Program Manager

[Gd] Will the Real <Your Site Here> Please Stand Up?

| More

Official Google Webmaster Central Blog: Will the Real <Your Site Here> Please Stand Up?

Webmaster Level: Intermediate

In our recent post on the Google Online Security Blog, we described our system for identifying phishing pages. Of the millions of webpages that our scanners analyze for phishing, we successfully identify 9 out of 10 phishing pages. Our classification system only incorrectly flags a non-phishing site as a phishing site about 1 in 10,000 times, which is significantly better than similar systems. In our experience, these “false positive” sites are usually built to distribute spam or may be involved with other suspicious activity. If you find that your site has been added to our phishing page list (“Reported Web Forgery!”) by mistake, please report the error to us. On the other hand, if your site has been added to our malware list (“This site may harm your computer”), you should follow the instructions here. Our team tries to address all complaints within one day, and we usually respond within a few hours.

Unfortunately, sometimes when we try to follow up on your reports, we find that we are just as confused as our automated system. If you run a website, here are some simple guidelines that will allow us to quickly fix any mistakes and help keep your site off our phishing page list in the first place.

- Don’t ask for usernames and passwords that do not belong to your site. We consider this behavior phishing by definition, so don’t do it! If you want to provide an add-on service to another site, consider using a public API or OAuth instead.

- Avoid displaying logos that are not yours near login fields. Someone surfing the web might mistakenly believe that the logo represents your website, and they might be misled into entering personal information into your site that they intended for the other site. Furthermore, we can’t always be sure that you aren’t doing this intentionally, so we might block your site just to be safe. To prevent misunderstandings, we recommend exercising caution when displaying these logos.

- Minimize the number of domains used by your site, especially for logins. Asking for a username and password for Site X looks very suspicious on Site Y. Besides making it harder for us to evaluate your website, you may be inadvertently teaching your visitors to ignore suspicious URLs, making them more vulnerable to actual phishing attempts. If you must have your login page on a different domain from your main site, consider using a transparent proxy to enable users to access this page from your primary domain. If all else fails...

- Make it easy to find links to your pages. It is difficult for us (and for your users) to determine who controls an off-domain page in your site if the links to that page from your main site are hard to find. All it takes to clear this problem up is to have each off-domain page link back to an on-domain page which links to it. If you have not done this, and one of your pages ends up on our list by mistake, please mention in your error report how we can find the link from your main site to the wrongly blocked page. However, if you do nothing else...

- Don’t send strange links via email or IM. It’s all but impossible for us to verify unusual links that only appeared in your emails or instant messages. Worse, using these kinds of links conditions your users/customers/friends to click on strange links they receive through email or IM, which can put them at risk for other Internet crimes besides phishing.

While we hope you consider these recommendations to be common sense, we’ve seen major e-commerce and financial companies break these guidelines from time to time. Following them will not only improve your experience with our anti-phishing systems, but will also help provide your visitors with a better online experience.

Written by Colin Whittaker, Anti-Phishing Team

[Gd] Bringing improved support for Adobe Flash Player to Google Chrome

| More

Chromium Blog: Bringing improved support for Adobe Flash Player to Google Chrome

Adobe Flash Player is the most widely used web browser plug-in. It enables a wide range of applications and content on the Internet, from games, to video, to enterprise apps.

The traditional browser plug-in model has enabled tremendous innovation on the web, but it also presents challenges for both plug-ins and browsers. The browser plug-in interface is loosely specified, limited in capability and varies across browsers and operating systems. This can lead to incompatibilities, reduction in performance and some security headaches.

That’s why we are working with Adobe, Mozilla and the broader community to help define the next generation browser plug-in API. This new API aims to address the shortcomings of the current browser plug-in model. There is much to do and we’re eager to get started.

As a first step, we’ve begun collaborating with Adobe to improve the Flash Player experience in Google Chrome. Today, we’re making available an initial integration of Flash Player with Chrome in the developer channel. We plan to bring this functionality to all Chrome users as quickly as we can.

We believe this initiative will help our users in the following ways:

  • When users download Chrome, they will also receive the latest version of Adobe Flash Player. There will be no need to install Flash Player separately.

  • Users will automatically receive updates related to Flash Player using Google Chrome’s auto-update mechanism. This eliminates the need to manually download separate updates and reduces the security risk of using outdated versions.

  • With Adobe's help, we plan to further protect users by extending Chrome's “sandbox” to web pages with Flash content.

Improving the traditional browser plug-in model will make it possible for plug-ins to be just as fast, stable, and secure as the browser’s HTML and JavaScript engines. Over time this will enable HTML, Flash, and other plug-ins to be used together more seamlessly in rendering and scripting.

These improvements will encourage innovation in both the HTML and plug-in landscapes, improving the web experience for users and developers alike. To read more about this effort, you can read this post on the Flash Player blog.

Developers can download the Chrome developer channel version with Flash built in here. To enable the built-in version of Flash, run Chrome with the --enable-internal-flash command line flag.

Posted by Linus Upson, VP, Engineering

Monday, March 29, 2010

[Gd] Read Consistency & Deadlines: More control of your Datastore

| More

Google App Engine Blog: Read Consistency & Deadlines: More control of your Datastore

Last week we announced the 1.3.2 release of the App Engine SDK. We’re particularly excited about two new datastore features: eventually consistent reads, and datastore deadlines.

Read Consistency Settings

You now have the option to specify eventually consistent reads on your datastore queries and fetches. By default, the datastore updates and fetches data in a primary storage location, so reading an entity always has exactly up to date data, a read policy known as “strong consistency.” When a machine at the primary storage location becomes unavailable, a strongly consistent read waits for the machine to become available again, possibly not returning before your request handler deadline expires.

But not every use of the datastore needs guaranteed, up-to-the-millisecond freshness. In these cases, you can tell the datastore (on a per-call basis) that it’s OK to read a copy of the data from another location when the primary is unavailable. This read policy is known as “eventual consistency.” The secondary location may not have all of the changes made to the primary location at the time the data is read, but it should be very close. In the most common case, it will have all of the changes, and for a small percentage of requests, it may be a few hundred milliseconds to a few seconds behind.

By using eventually consistent reads, you trade consistency for availability; in most cases you should see a reduction in datastore timeouts and error responses for your queries and fetches.

Prior to this new feature, all datastore reads used strong consistency, and this is still the default. However, eventual consistency is useful in many cases, and we encourage using it liberally throughout most applications. For example, a social networking site that displays your friends’ status messages may not need to display the freshest updates immediately, and might prefer to show older messages when a primary datastore machine becomes unavailable, rather than wait for the machine to become available again, or show no messages at all with an error.

(Note that eventual consistency is never used during a transaction: transactions are always completely consistent.)

Datastore Deadlines

The datastore now also allows you to specify a deadline for your datastore calls, which is the maximum amount of time a datastore call can take before responding. If the datastore call is not completed by the deadline, it is aborted with an error and app execution can continue. This is especially useful since the datastore now retries most calls automatically, for up to 30 seconds. By setting a deadline that is smaller than that, you allow the datastore to retry up to the amount of time that you specify, while always returning control to your app within the deadline window. If your application is latency sensitive, or if you’d prefer to take an alternate action when a request takes too long (such as displaying less data or consulting a cache), deadlines are very useful: they give your application more control.

Setting the Read Policy and Datastore Deadline

To enable deadlines and eventual consistency with Python, you create an RPC object with the function create_rpc() and set the deadline and read_policy on the object. You then pass the RPC object to the call as an argument. Here’s an example of how you would do this on a datastore fetch:

rpc = db.create_rpc(deadline=5, read_policy=db.EVENTUAL_CONSISTENCY)

results = Employee.all().fetch(10, rpc=rpc)

To set a deadline and datastore read policy in Java, you call the addExtension() and setTimeoutMillis() methods, respectively, on a single Query object:

Query q = pm.newQuery(Employee.class);

q.addExtension("datanucleus.appengine.datastoreReadConsistency", "EVENTUAL");

q.setTimeoutMillis(5000); // 5-second deadline, matching the Python example

You can also enable these features via configuration in JDO and JPA, or use them directly with the low-level Java datastore API. See the documentation for these features in Python and Java for more information.

Posted by The App Engine Team

[Gd] Student Applications Open for Google Summer of Code 2010

| More

Google Code Blog: Student Applications Open for Google Summer of Code 2010

Want to work on a cool open source project, hone your development skills with the help of a dedicated mentor, and get paid? Look no further - student applications are now open for Google Summer of Code™ 2010.

Since its inception in 2005, the Google Summer of Code program has brought together nearly 3,400 students and more than 3,000 mentors from nearly 100 countries worldwide - all for the love of code. Through the program, accepted student applicants are paired with a mentor or mentors from participating projects, thus gaining exposure to real-world software development scenarios. They also receive an opportunity for employment in areas related to their academic pursuits. And best of all, more source code is created and released for the benefit of users and developers everywhere.

Full details, including pointers on how to apply, are available on the Google Open Source Blog.

By Leslie Hawthorn, Google Open Source Team