Saturday, July 17, 2010

[Gd] There, but for the grace of testing, go I


Google Testing Blog: There, but for the grace of testing, go I

By James A. Whittaker

I've had more than a few emails about "antenna-gate" asking me to comment and suggesting clever, stabbing rebukes to a fallen competitor. I might aim a few of those at my own team in the future, and some were genuinely funny, but none of them will appear here. Instead I offer first a word of caution and second a reflection that my Mom used to intone whenever disaster occurred around her. It's called "counting your blessings."

First, a caution that those of us who live in glass houses really should keep stones at arm's length. The only way anyone can rebuke Apple, without risk of waking up one morning sucking on their own foot, is if they write no software or have no users. Apple does a lot of the former and they enjoy many of the latter. Bugs like this make me sick when they are mine and nervous when they aren't. If any tester in the industry isn't taking stock right now, then they either aren't producing any software or aren't in possession of any users, at least ones they wish to keep.

Second, taking stock has made me realize that I enjoy some important blessings that make the infinite task of testing so much more manageable. Indeed, the three blessings I count here are really the reason that testing doesn't fail more often than it does.

The Blessing of Unit Testing

I am thankful for early cycle testing thinning out the bug herd. In late cycle testing major bugs are often masked by minor bugs and too many of the latter can hamper the search for the former. Every bug that requires a bug report means lost time. There is the time spent to find the bug; time spent to reproduce and report it; time to investigate its cause and ensure it is not a duplicate; time to fix it, or to argue about whether it should be fixed; time to build the new version and push it to the test lab; time to verify the fix; time to test that the fix introduced no additional bugs. Clearly the smaller the population to begin with, the easier the task becomes. Solid unit testing is a tester's best friend.

The Blessing of Rarity

I am thankful that the vast majority of bugs that affect entire user populations are generally nuisance-class issues. These are typically bugs concerning awkward UI elements or the occasional misfiring of some feature or another where workarounds and alternatives will suffice until a minor update can be made. Serious bugs tend to have a more localized effect. True recall class bugs, serious failures that affect large populations of users, are far less common. Testers can take advantage of the fact that not all bugs are equally damaging and prioritize their effort to find bugs in the order of their seriousness. The futility of finding every bug can be replaced by an investigation based on risk.

Risk analysis is so important that we've built an internal tool to help guide testers in performing it. Code-named "Testify," this tool streamlines the process of risk analysis, at least the way we do it at Google. We're working on open-sourcing an early prototype in time for GTAC 2010 (I can hear my team cringing now ... "you promised it when?").

The Blessing of Repetition

I am thankful that user behavior is highly repetitive. There are features that enjoy heavy usage across user populations and features that are far less popular. Mobile phones are a good example of this. The phone is constantly establishing connections to networks. Certain features like making and receiving calls, texting and so forth are used more often than taking pictures or searching maps. The popularity of user applications is a matter of hard data, not guesswork. Knowing what users do most often, less often and least often means testing resources can be applied with a commensurate amount of force and that testing itself can be patterned after actual usage profiles.
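The idea of applying testing resources "with a commensurate amount of force" can be made concrete with a small sketch. This is not from the post; it is a toy illustration with made-up feature names and usage counts, showing a fixed testing budget split in proportion to an observed usage profile:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Toy sketch: split a fixed testing budget across features in
 *  proportion to observed usage counts. Feature names and numbers
 *  are invented for illustration. */
public class UsageWeightedBudget {
    static Map<String, Long> allocate(Map<String, Long> usageCounts, long totalHours) {
        long totalUsage = usageCounts.values().stream().mapToLong(Long::longValue).sum();
        Map<String, Long> hours = new LinkedHashMap<>();
        for (Map.Entry<String, Long> e : usageCounts.entrySet()) {
            // Round each feature's proportional share of the budget to whole hours.
            hours.put(e.getKey(), Math.round(totalHours * (double) e.getValue() / totalUsage));
        }
        return hours;
    }

    public static void main(String[] args) {
        Map<String, Long> usage = new LinkedHashMap<>();
        usage.put("calls", 700L);
        usage.put("texting", 250L);
        usage.put("camera", 50L);
        System.out.println(allocate(usage, 100)); // {calls=70, texting=25, camera=5}
    }
}
```

In practice the usage counts would come from real telemetry ("a matter of hard data, not guesswork") rather than hard-coded numbers.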

Testers can gain a great deal from taking the user’s point of view and weaving usage concerns into the software testing process. Focusing on the user ensures that high impact bugs are found early and software revisions that break key user scenarios are identified quickly and not allowed to persist.

Apple may be the company in the news today; who knows who it will be tomorrow. Every company that produces software people care about has either been there or will be there. The job is simply too big for perfection to be an option. But there are key advantages we have that make the job manageable.

Put down the stones and make sure that what few blessings we testers possess are being exploited for everything they are worth. Hopefully, your company will be spared, and the next time a company suffers such a bug you won't be the one making excuses. Perhaps you'll be lucky enough to be the one saying, "there but for the grace of testing go I."

Friday, July 16, 2010

[Gd] GWT 2.0.4 now available in maven central


Google Web Toolkit Blog: GWT 2.0.4 now available in maven central

I'm pleased to announce that the GWT 2.0.4 jars are now available in the maven central repository. Better maven support has been frequently requested on the issue tracker and mailing list, and this is a first step in that direction. In the future, Google will publish GWT releases to maven central as part of the release process.

The GWT 2.0.4 jars currently in the repository include gwt-user, gwt-dev, and gwt-servlet. Please note that gwt-soyc-vis as a separate jar has been deprecated and soyc (story of your compile) is now bundled with gwt-dev.
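For readers using Maven, a dependency declaration for these jars might look like the following sketch. The coordinates assume the standard com.google.gwt group id; verify the exact artifacts and scopes against the repository:

```xml
<!-- GWT 2.0.4 from Maven Central. gwt-user is needed for compiling client
     code; gwt-servlet is the runtime jar to ship with your webapp. -->
<dependency>
  <groupId>com.google.gwt</groupId>
  <artifactId>gwt-user</artifactId>
  <version>2.0.4</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>com.google.gwt</groupId>
  <artifactId>gwt-servlet</artifactId>
  <version>2.0.4</version>
</dependency>
```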


[Gd] [Libraries][Update] Dojo 1.5.0


Google AJAX API Alerts: [Libraries][Update] Dojo 1.5.0

Dojo was updated to version 1.5.0.

[Gd] Video Sitemaps 101: Making your videos searchable


Official Google Webmaster Central Blog: Video Sitemaps 101: Making your videos searchable

Webmaster Level: All

We know that some of you, or your clients or colleagues, may be new to online video publishing. To make it easier for everyone to understand video indexing and Video Sitemaps, we’ve created a video -- narrated by Nelson Lee, Video Search Product Manager -- that explains everything in basic terms.

Also, last month we wrote about some best practices for getting video content indexed on Google. Today, to help beginners better understand the whys and hows of implementing a Video Sitemap, we added a starting page to the information on Video Sitemaps in the Webmaster Help Center. Please take a look and share your thoughts.

Posted by Amy MacIsaac, Content Partnerships

[Gd] Building Wave Gadgets with GWT


Google Wave Developer Blog: Building Wave Gadgets with GWT

Hilbrand Bouwkamp is an independent internet developer/trainer/presenter specializing in RIA, GWT and Android. He has been following GWT and Wave since their first releases and has created two open source libraries related to GWT and Wave: cobogw and cobogwave. This blog post is about the cobogwave library.

As soon as Google Wave was released and I had an account, I wanted to write a Wave Gadget. I believe Gadgets are one of the strengths of Wave because they let you add a structured component to an unstructured communication flow to make things more efficient. For example, a simple date picker Gadget can be added to an event-planning wave, and instead of people having to go to a separate site and communicate their date preferences manually, they can do the date-selection in the wave, and all of the information is stored in a single place. There are many situations like this where gadgets can add structure and keep related information together.

Since the Google Wave client itself is built with GWT (Google Web Toolkit), it seemed natural to write a Gadget with GWT as well. To do that, I needed to wrap the Wave Gadgets JavaScript API with my own GWT JSNI wrapper. Like other GWT gadget developers, I wrote my own wrapper - but I wanted to do it in a way that other developers could benefit from. So, I made sure that my wrapper included all of the Wave Gadgets API functionality, I wrote documentation for it, and I open-sourced it under the Apache 2 license as the cobogwave library. Now, other developers can skip the wrapper-writing step and simply focus on writing their gadget.

The cobogwave library makes it very easy to build gadgets for Wave. Just like the iGoogle GWT Gadgets API library, it defines a Needs interface: NeedsWave. By implementing the interface, you can make your gadget code Wave-enabled. Or, you can simply extend the WaveGadget class for the same effect.

GWT developers are accustomed to working with handlers, so the cobogwave library provides much of its functionality via handlers. For example, you can register for the ModeChangeEvent to be notified when the user changes from playback to edit mode, the ParticipantUpdateEvent when a participant is added to or removed from the wave, and the StateUpdateEvent when the gadget receives a new state. The cobogwave library also has support for experimental functionality in the Wave Gadget library, like the Wave UI Widgets Button, Frame and Dialog.

Recently, I was involved in the latest release of the gadgets support in the Google API Libraries for GWT. In the new version, it's much easier to implement RPC calls to your own server, and this method also works for Wave Gadgets.

Here's a sampling of the diverse range of gadgets that developers have built using the cobogwave library:

  • Shortyz: Lets you solve crosswords together in a wave - ported from Android code.

  • MindMap: Lets you create an interactive mindmap, and vote nodes up or down.

  • Karma: Lets participants in a Wave rate each other.

  • Pongy: Lets you play the classic game "Pong" with a fellow Wave user. (This last gadget was written by me to showcase the highly interactive possibilities of Wave.)

To start building your own Wave Gadgets with GWT, visit the cobogwave project page, and to keep informed of updates, follow me on Twitter.

Posted by Hilbrand Bouwkamp, Community Developer

Thursday, July 15, 2010

[Gd] Market Statistics Adjustments


Android Developers Blog: Market Statistics Adjustments

If you look closely today, you'll notice that some per-app Android Market statistics have lower values; not big differences, but noticeable in a few cases. We discovered last week that, starting in early June, certain events had been double-counted: installs, uninstalls, impressions, and so on. The most obvious symptom was (for paid apps) a discrepancy between the number of installs and the number of reported sales through Checkout.

The underlying problem has been corrected and following data repair, the reported statistics should now be accurate. Our apologies for the glitch.


[Gd] Android Market Welcomes Korea!


Android Developers Blog: Android Market Welcomes Korea!

As of today, Android Market is open for business to application buyers in the Republic of Korea. We hope that this will make the outstanding Android devices now available in that nation even more useful and fun. We welcome the people of Korea, acknowledged everywhere as one of the world's most-wired societies, to the world of Android.

[Gd] Dev Channel Update


Google Chrome Releases: Dev Channel Update

The Dev channel has been updated to 6.0.466.0 for Windows and Linux.

  • Late binding enabled for SSL sockets:  High priority SSL requests are now always sent to the server first.
  • The extension api “chrome.idle” has moved out of experimental and now has its own permission: “idle”.
  • Fixed crash with SPNEGO authentication on intranet sites.
  • Flickering favicons on Ubuntu Maverick should be fixed.  (There are other graphical glitches, but those also appear in other apps, so that appears to not be our bug.)
  • Content settings window now uses a list instead of tabs.
  • Remove unnecessary MIMEType field from application shortcuts.
More details about additional changes are available in the svn log of all revisions.

You can find out about getting on the Dev channel here:

If you find new issues, please let us know by filing a bug at

Anthony Laforge
Google Chrome

[Gd] Nature chooses OpenSocial


OpenSocial API Blog: Nature chooses OpenSocial

We’d like to share a little bit about why we adopted the OpenSocial platform for our new online service, called Workbench, but first we should probably fill in some details about who we are.

Founded in 1869 as a vehicle for reporting the grand results of scientific work and discovery, Nature is one of the most distinguished scientific publishers in the world. Nature exists to be a scientific communications company, and back in 1869 the best way to achieve that goal was to publish a periodical. Nowadays, if you were to start with that mission you would most likely start on the Internet.

Nature Network is our domain-specific network for researchers. Our intention with Nature Network is to not only provide a platform where scientists can create an online presence and interact with each other, but also to bring interesting information and functionality to scientists. There are a few advantages to having a domain-specific network over a generic network. First of all, it allows the emergence of community norms, i.e. tone of voice, that are fit for that specific community. Also, by creating and hosting our own network, we have the chance to connect the research literature with the social activity around that literature. We have started, for example, to link our academic papers to blog posts that mention those papers, and we have many plans going forward to extend these kinds of connections.

While working on upgrading our site over the last year, we decided to build in an API and so became interested in OpenSocial. Obviously, since Nature Network promotes social connections, OpenSocial was a contender straight away. We wanted to be able to host interesting functionality on our site and the gadget specification is perfect for that.

Our redevelopment process happened in Java, and again that was a great fit with the Shindig project. Being able to see an active developer community around the standard and around the reference implementation was also a major factor in our decision. Our developers found it easy to integrate Shindig with the code that we were creating, and they did this quickly as an early prototype. Seeing our wireframes turn into a live demo was a great moment - it was at that point that we decided to go with OpenSocial.

To summarise, we wanted to be able to create an API and offer gadgets to our users. OpenSocial provided a great answer to both of those needs. As we continue to evolve and add capability to the Workbench, we are planning to open our gadget platform and API to external developers so that we can build an ecosystem based on open standards around all the exciting activity that is happening within Nature Network. The ongoing development and great documentation that exists have given us confidence in the project and we are excited by the direction that OpenSocial is heading.

Please feel free to drop us a note with your questions; you can contact us via e-mail.
Nature Network Team

Nature Network is the professional networking website for scientists around the world. It’s an online meeting place where you and your colleagues can gather, share and discuss ideas, and keep in touch. It’s also where you can consult the community for answers to scientific questions or offer your expertise to help others. Additionally, using the Workbench, you can collate your online scientific tools together in a customizable workspace, allowing you to group your most important tools and information in the way that works best for you.

Posted on behalf of the Nature Network Team by Mark Weitzel, Secretary, OpenSocial Foundation


[Gd] [Search][Release] Fixed Custom Search element refinement tabs error.


Google AJAX API Alerts: [Search][Release] Fixed Custom Search element refinement tabs error.

Fixed the bug involving display of refinement tabs in the Custom Search element, as described here:

[Gd] Behind the Scenes of the Wave API Python Client Library

| More

Google Wave Developer Blog: Behind the Scenes of the Wave API Python Client Library

When I heard that Australia was going to have its very own PyCon, I knew I wanted to give a talk. While working with the Wave APIs over the last year, I've gotten to the point where I'm using the Python client library on a daily basis, and I've learnt a lot about Python from our library. I wanted to give a talk that would be interesting both to Wave API developers and to Python developers and would force me to dig deeper into the depths of our client library.

So, I presented a talk called "Wave Robots API: Behind the Scenes", with the goal of showing how we used Python to abstract on top of our HTTP API. I started with an overview of Google Wave and a quick look at Wave's core technology — the conversation model and operational transformation algorithm — so that everyone in the room would be comfortable with me talking about blips, wavelets, operations, and the like. Then I went deep into the robots API, explaining the JSON-RPC protocol between the Wave server and robots, and showing how the Python client library serializes the JSON into Python objects, how it lets developers register for events, and how it signs outgoing requests using OAuth. I then explained how we designed the client library to be hosting-provider-agnostic, and live demoed a robot that I created using the Django framework on a slicehost node. I finished with a summary of the most important features of the client library — versioning, automation, authentication, flexibility, and being Pythonic.

But, hey, if all that sounds interesting to you, you don't have to read about it -- you can watch it! Check out the video here, and the slides here. If you have any questions after watching, just head over to our Google Wave API forum.

Posted by Pamela Fox, Developer Relations

Wednesday, July 14, 2010

[Gd] GwtRpcCommLayer - Extending GWT RPC to do more than just basic serialization


Google Web Toolkit Blog: GwtRpcCommLayer - Extending GWT RPC to do more than just basic serialization

Would you like to have unit tests for your server code that exercise the same call sites your users exercise? Would you like to do batch processing using these same RemoteService APIs, without having to drive a web browser? If so, this article is for you. Jeff McHugh, from Segue Development, has offered to share his work, which builds on the RPC capabilities that ship with GWT.

Extending GWT RPC to do more than just basic serialization

If you are currently working with GWT and require serialization, this article might be of great interest to you. GWT RPC is an API provided by the GWT SDK that makes client/server communication easy to implement and support. I developed GwtRpcCommLayer, a library that sits on top of GWT RPC, that:

  1. makes performance-testing (a.k.a. stress-testing) simple to execute without any changes to your existing code

  2. makes unit-testing possible from within a command-line scripting framework such as Maven or Ant

  3. makes it possible to have a “web service” style interface to your backend server: instead of access being restricted to browsers, the very same servlet code can support Java clients implemented in Swing, AWT, or other servlets

This post will explain how GwtRpcCommLayer makes this possible. The entire codebase is available for download and can be used with your GWT projects.


As you already know, the GWT SDK contains a very powerful serialization mechanism that can be easily leveraged to make AJAX RPC calls. This serialization mechanism is called GWT RPC. The API reflects the overall spirit of Java RMI, which allows you to call a remote object's methods just as if it were running within the local JVM. In either case, there's no need to handle any of the marshaling/un-marshaling (serialization/de-serialization) details. That is all taken care of by the underlying framework of the API.

What makes GWT RPC so powerful is that it enables Java developers to be more productive, by allowing them to stay grounded with their existing implementation model, i.e. no conversions to and from JSON, XML, CSV. If you already have objects in your model such as UserAccount, LineItem, etc., you don't need to spend the extra effort building a mapping to and from JSON or XML.

Taking advantage of GWT RPC requires no additional libraries outside of the standard GWT SDK, and it is straightforward to use, making it a great solution for serialization.


My extension to GWT RPC takes advantage of the underlying foundation of how GWT RPC is designed, but swaps out the standard GWT “serialization engine” with one that uses simple object serialization via the standard java.io streams.
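As a self-contained illustration of that swap, here is a minimal round trip through standard java.io object serialization, the mechanism described above. The UserAccount class here is a made-up stand-in, not part of the library:

```java
import java.io.*;

/** Minimal round trip through java.io object serialization -- the same
 *  mechanism GwtRpcCommLayer is described as using in place of GWT's
 *  string-based wire format. UserAccount is an invented example class. */
public class SerializationDemo {
    static class UserAccount implements Serializable {
        private static final long serialVersionUID = 1L;
        final String email;
        UserAccount(String email) { this.email = email; }
    }

    static byte[] toBytes(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);          // marshal to bytes
        }
        return bos.toByteArray();
    }

    static Object fromBytes(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();     // un-marshal back to an object
        }
    }

    public static void main(String[] args) throws Exception {
        UserAccount ua = new UserAccount("someone@example.com");
        UserAccount copy = (UserAccount) fromBytes(toBytes(ua));
        System.out.println(copy.email);  // someone@example.com
    }
}
```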

The design of the code-base is naturally separated into two distinct parts:

  • Client side code – code executes within a JVM (replacing the JavaScript Engine), which could be a command-line app, Swing/AWT app, ANT task, intermediate servlet, etc.

  • Server side code – code executes on your server, within your servlet container, and fits seamlessly within your existing servlet code

The following requirements were part of the design goals when I started working on the project:

  1. The syntax of usage should exactly mirror the calls the developer is already using in his/her client side application. Examine the following client-side code:
    UserAccount ua = gwtRpcServerImpl.getUserAccount("");

    when using GwtRpcCommLayer on the client side, things should work the same:
    UserAccount ua = gwtRpcCommLayerServerImpl.getUserAccount("");

  2. The footprint on the server side must be as close to invisible as possible. The existing server methods had to be left untouched.

  3. The developer must be able to utilize this codebase without making any changes to existing servlet code. GwtRpcCommLayer could be added retroactively, without any need for servlet modification.

All three of these requirements were met, which will hopefully encourage developers to not only try this library out for themselves, but perhaps even contribute to the development of further enhancements and functionality.

How it works

The basic premise of the design follows the same design pattern used by GWT RPC, but instead of using a delimited/ASCII format, I simply swapped that out with Java's standard object serialization. Since both GWT RPC and standard Java serialization mandate that objects intended to be serialized implement the java.io.Serializable interface, the process was straightforward.

Additionally, I took advantage of the Java Reflection API, which allows for the runtime creation of dynamic “proxy” classes. Using reflection, it is possible to shield the developer from any awareness of the “serialization engine”, which satisfied one of my three requirements: purity of syntax.
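To make the proxy technique concrete, here is a minimal, self-contained sketch using the JDK's java.lang.reflect.Proxy. The GreetingService interface and the echo behavior are invented for illustration; the real library would serialize the intercepted call and send it over HTTP instead of answering locally:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

/** Sketch of the JDK dynamic-proxy technique: every call on the service
 *  interface is routed to an InvocationHandler, which is where a library
 *  like GwtRpcCommLayer would serialize the method name and arguments
 *  and POST them to the servlet. */
public class ProxyDemo {
    interface GreetingService {
        String greetServer(String name);
    }

    static GreetingService createStub() {
        InvocationHandler handler = (proxy, method, args) -> {
            // Real code would serialize method.getName() + args and send
            // them over the wire; here we just echo them back.
            return "echo:" + method.getName() + ":" + args[0];
        };
        return (GreetingService) Proxy.newProxyInstance(
            GreetingService.class.getClassLoader(),
            new Class<?>[] {GreetingService.class},
            handler);
    }

    public static void main(String[] args) {
        System.out.println(createStub().greetServer("hello world"));
        // echo:greetServer:hello world
    }
}
```

From the caller's side the stub looks exactly like a local implementation of the interface, which is the "purity of syntax" goal described above.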

Below is a very basic diagram of (#1) how the GWT RPC interaction works and (#2) how the GwtRpcCommLayer interaction works. Keep in mind that from the developer's point of view, nothing changes in terms of how the code executes, both on the client and server side.

Server Implementation

In order to take advantage of GwtRpcCommLayer, you can choose one of two options. Both options work well. The choice comes down to style more than anything else.

Option 1

Create an instance of GwtRpcCommLayerServlet in your web.xml file. Specify your own servlet as an initialization parameter:

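The original snippet was not preserved here, but a web.xml entry along the following lines is the shape option 1 describes. Note that the init-param name and the example target servlet class are assumptions; consult the GwtRpcCommLayer documentation for the exact keys:

```xml
<!-- Sketch only: "targetServlet" and the example class names are
     assumed for illustration, not confirmed against the library. -->
<servlet>
  <servlet-name>gwtRpcCommLayer</servlet-name>
  <servlet-class>gwtrpccommlayer.server.GwtRpcCommLayerServlet</servlet-class>
  <init-param>
    <param-name>targetServlet</param-name>
    <param-value>com.example.server.GreetingServiceImpl</param-value>
  </init-param>
</servlet>
<servlet-mapping>
  <servlet-name>gwtRpcCommLayer</servlet-name>
  <url-pattern>/myapp/greet</url-pattern>
</servlet-mapping>
```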

Choose option 1 if you don't want to alter your servlet classes.

Option 2

Servlet classes which extend RemoteServiceServlet should instead extend gwtrpccommlayer.server.GwtRpcCommLayerServlet. Since GwtRpcCommLayerServlet itself extends
RemoteServiceServlet, all the same methods are available to your servlet and your server code will continue to operate normally. Upon review of the associated source code, you will observe that the main service method has been overridden to provide branch logic to handle both normal GWT RPCs and the newly supported GwtRpcCommLayer RPCs.

Client Implementation

Client implementation is simply a matter of instantiating a “proxy class” and making the same remote service calls your client code would normally make. Here is a simple example:

//STEP 1: create the URL that points to your service/servlet
URL url = new URL("http://localhost:8080/myapp/greet"); // hypothetical URL for your servlet

//STEP 2: create an instance of GwtRpcCommLayerClient
GwtRpcCommLayerClient client = new GwtRpcCommLayerClient(url);

//STEP 3: ask the client for a reference to the client-side proxy of your remote service
GreetingService stub =
(GreetingService) client.createRemoteServicePojoProxy(GreetingService.class);

//STEP 4: any call you execute against this proxy will ultimately execute on your servlet
String echoResponse = stub.greetServer("hello world");

As you can see from the above fictitious service, it takes only three lines of code to gain access to your remote service, and calls against it behave just as if they were regular GWT RPC calls.


  • Encapsulate the above calls in some more sophisticated threading and you should be able to see how easy it is to create a “stress test” for your server code.

  • Encapsulate the above calls into unit tests and you can suddenly perform a client/server unit test for each and every exposed remote method.

  • It should also be obvious that this class lends itself to doing backend processing. Perhaps you have the need to do batch processing using a CSV file. Instead of having to develop another interface (or web service), you could instead simply use your existing codebase and existing servers.

  • As time advances for a particular project, you might develop the need to expose some of these services to a client other than a browser. Your codebase can easily serve as a starting point for that, hopefully cutting down on your implementation time.


Tuesday, July 13, 2010

[Gd] Introduction to ReportDefinitionService


AdWords API Blog: Introduction to ReportDefinitionService

Reporting is an integral part of most AdWords API applications. To help you create and manage reports related to your AdWords campaigns, we introduced the ReportDefinitionService in the v201003 version of the AdWords API. In this post, we’ll cover the basics of working with the ReportDefinitionService. If you’ve used the v13 ReportService, you’ll notice that the new ReportDefinitionService differs in many ways from its v13 counterpart.

Retrieving the report fields

To create a report in the v201003 version of AdWords API, you pick a report type of your choice and then retrieve the list of supported fields using getReportFields. The Java code below shows how to retrieve the report fields:

// Get report fields for the chosen report type.
ReportDefinitionField[] reportDefinitionFields = reportDefinitionService
    .getReportFields(ReportDefinitionReportType.KEYWORDS_PERFORMANCE_REPORT);
// Display report fields.
System.out.println("Available fields for report:");
for (ReportDefinitionField reportDefinitionField : reportDefinitionFields) {
  System.out.print("\t" + reportDefinitionField.getFieldName() + "("
      + reportDefinitionField.getFieldType() + ")");
}

This feature is quite convenient if you want to display the list of supported fields to a user and allow them to pick the desired fields. For those who prefer static documentation like in v13, we are working on it, and it will be made available in the near future.

Defining the report

To create a report in the v201003 version of AdWords API, you have to create a report definition first. This is different from the v13 version of API, where you could schedule a report without creating any definitions first. The following Java code snippet creates a report definition:

// Create ad group predicate.
Predicate adGroupPredicate = new Predicate();
adGroupPredicate.setField("AdGroupId");
adGroupPredicate.setOperator(PredicateOperator.IN);
adGroupPredicate.setValues(new String[] {adGroupId});
// Create selector.
Selector selector = new Selector();
selector.setFields(new String[] {"AdGroupId", "Id", "KeywordText",
    "KeywordMatchType", "Impressions", "Clicks", "Cost"});
selector.setPredicates(new Predicate[] {adGroupPredicate});
selector.setDateRange(new DateRange(startDate, endDate));
// Create report definition.
ReportDefinition reportDefinition = new ReportDefinition();
reportDefinition.setReportName("Keywords performance report");
reportDefinition.setReportType(ReportDefinitionReportType.KEYWORDS_PERFORMANCE_REPORT);
reportDefinition.setDateRangeType(ReportDefinitionDateRangeType.CUSTOM_DATE);
reportDefinition.setSelector(selector);
// Create operations.
ReportDefinitionOperation operation = new ReportDefinitionOperation();
operation.setOperand(reportDefinition);
operation.setOperator(Operator.ADD);
ReportDefinitionOperation[] operations = new ReportDefinitionOperation[] {operation};
// Add report definition.
ReportDefinition[] result = reportDefinitionService.mutate(operations);

When successful, the API call creates a report definition under the report section of your account’s Control Panel and Library. You may delete or modify a report definition using the mutate method with the REMOVE and SET operators, respectively. You can also retrieve all report definitions in your account using the get method.

If you’d like to inexpensively validate the report definition before adding it, you can call the mutate method as shown above with the validateOnly header set. This works similarly to the validateReportJob method in the v13 version of the AdWords API. You can refer to our earlier blog post for details on how to use validateOnly headers to validate API calls.

The v201003 version of ReportDefinitionService introduces a new feature called predicates. You can use predicates to filter the results by any field with canFilter=true. You can also use various operators to define the filtering condition. This is an improvement over v13, where the data could only be filtered using certain predefined fields like campaignId or adGroupId.

The v201003 version of ReportDefinitionService allows you to generate reports for predefined date ranges. This is very useful if you need to download a predefined type of report on a regular basis. For instance, you can create a single report definition with dateRangeType as YESTERDAY, and then use that report definition to download a daily report. This is an improvement over v13 where you needed to schedule a new report every day to accomplish the same task. If you need to download reports for a custom period, you can set dateRangeType as CUSTOM_DATE.

The v201003 ReportDefinitionService currently has some limitations: it lacks support for aggregation types and cross client reports. We are working to add support for these features in a future version of the AdWords API.

Generating and downloading a report

To download a report in the v201003 version of the AdWords API, you need to issue an HTTP GET to https://adwords.google.com/api/adwords/reportdownload?__rd=&lt;reportDefinitionId&gt;. You can get the report definition id from the result returned by the server when you add a report definition as shown in the code sample above. In addition, you need to specify authToken and clientLogin (or clientCustomerId) as HTTP headers to authorize the report download. Note that the clientLogin or clientCustomerId must be the same as the one used while creating the report definition. The sample Java code below downloads a report:

String url = "https://adwords.google.com/api/adwords/reportdownload?__rd="
    + reportDefinitionId;
HttpURLConnection urlConn =
    (HttpURLConnection) new URL(url).openConnection();
urlConn.setRequestMethod("GET");
urlConn.setRequestProperty("Authorization", "GoogleLogin auth="
    + user.getRegisteredAuthToken());
if (user.getClientCustomerId() != null) {
  urlConn.setRequestProperty("clientCustomerId", user.getClientCustomerId());
} else if (user.getClientEmail() != null) {
  urlConn.setRequestProperty("clientEmail", user.getClientEmail());
} else {
  urlConn.setRequestProperty("clientEmail", user.getEmail());
}
copyStream(urlConn.getInputStream(), new FileOutputStream(
    new File(outputFileName)));

The raw HTTP message when executing this code is as shown below:

GET /api/adwords/reportdownload?__rd=XXXXXX HTTP/1.1
Accept: */*
Authorization: GoogleLogin auth=XXXXXX
clientEmail: XXXXXX

A major difference between this approach and the v13 approach is that report generation is inherently synchronous, unlike in v13 where you had to poll regularly using getReportJobStatus to see if a report job had completed, and then use getReportDownloadUrl to get the report download URL. Report generation in v201003 is also much faster than in v13, and most reports complete in a few seconds. The request to download reports will time out after 3 minutes, so we recommend that you use the gzipped CSV format to minimize the transfer size of potentially large reports. Gzipped XML format is not yet supported, but we’re working to include this feature in a future version of the AdWords API.
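Since the recommended download format is gzipped CSV, your client needs to decompress the response body before parsing it. A minimal sketch using the standard java.util.zip classes (the class and method names here are illustrative, not part of the client library):

```java
import java.io.BufferedReader;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipCsv {
  // Reads a gzip-compressed stream (such as a gzipped CSV report)
  // into a String, one line at a time.
  public static String readGzipped(InputStream in) throws IOException {
    BufferedReader reader = new BufferedReader(
        new InputStreamReader(new GZIPInputStream(in), "UTF-8"));
    StringBuilder sb = new StringBuilder();
    String line;
    while ((line = reader.readLine()) != null) {
      sb.append(line).append('\n');
    }
    reader.close();
    return sb.toString();
  }

  // Gzip-compresses a string; stands in for the server's response here.
  public static byte[] gzip(String s) throws IOException {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    GZIPOutputStream gz = new GZIPOutputStream(bytes);
    gz.write(s.getBytes("UTF-8"));
    gz.close();
    return bytes.toByteArray();
  }
}
```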

We've included support for ReportDefinitionService in all of our client libraries to help get you started, so please try it out and share your feedback with us on the forum or the projects' issue trackers.

-- Anash P. Oommen, AdWords API Team

Monday, July 12, 2010

[Gd] Open, Integrated and Giving You Choice: The Story Behind the Google Apps Marketplace


Google Code Blog: Open, Integrated and Giving You Choice: The Story Behind the Google Apps Marketplace

Gmail, Calendar, Docs, Sites and all Google Apps were designed as cloud-based services from day one.  Google’s web-centric approach allows any application to work seamlessly on any device with a browser, allowing users to work when, where, and how they want. There’s no more need for the constant upgrades, security patches and bug fixes required by client-based software.

Since the first step to the cloud for many businesses and schools is Gmail, the Google Apps Marketplace aims to make it easier for organizations that have “gone Google” to take the next step and take fuller advantage of the cloud by running even more of their infrastructure on cloud-based apps from hundreds of software companies.

These software companies agree the web-centric approach is the way to go, and are building their applications on web-based architectures and open standards like OpenID for Single Sign-On and OAuth for data access.  Marketplace developers build their applications using the technologies and hosting platform they prefer.  Want to build using Java?  Great.  Ruby or PHP?  Fine with us.  .NET?  Sure, the Marketplace supports that too.  These apps are then hosted on developers’ own servers, on Amazon EC2, on Google’s App Engine, or on any other cloud hosting service.  As developers, they don’t need to worry about proprietary tools, vendor lock-in, or proprietary cloud architecture lock-in, and as Google Apps customers, you’ll even find apps that compete with Google products such as SlideRocket presentations and Zoho CRM, giving you the maximum possible choice.

The key advantage of Marketplace apps, however, is their integration with Google Apps.  All installable Marketplace apps feature single sign-on with Google Apps, and most go beyond that to incorporate specific features that help you accomplish everyday tasks more easily in combination with Google’s applications.  Here is a tiny sampling of Marketplace apps that integrate with various Google Apps:

Gmail -- Manymoon is an online project management tool that makes it easy to turn emails from teammates or customers directly into tasks in your projects.  Kwaga Context and AwayFind are two productivity apps that help you manage your conversations directly in your Gmail inbox, keeping you more productive.

Spreadsheets -- SlideRocket lets you connect media-rich presentations to live data in Google Spreadsheets, so your presentations always display the most up-to-date charts and graphs, and Smartsheet lets you extend Google Spreadsheets with Gantt tracking and customer management features to empower your sales teams.

Calendar -- and TimeBridge are meeting management tools that make it easier to set up and conduct meetings with partners and customers who use different calendaring systems.

Sites -- RunMyProcess lets you embed custom business process workflows into Google Sites, so each part of an organization can more easily access the business processes that affect their daily work.

Talk -- Atlassian integrates Jira Studio with Google Talk, so your software development team can stay up to date with the latest build status and team conversations from within Jira Studio, all in real time.

There are hundreds more business applications available on the Marketplace for every aspect of your business.  Find CRM apps, Admin tools, Document Management apps, Productivity apps, and many more.

Every week more cloud-based business applications are added. If you can’t find an app you want please post a suggestion.

By Don Dodge, Google Apps Team

[Gd] Sharing the Joy of Creating Android Apps with Everyone


Google Code Blog: Sharing the Joy of Creating Android Apps with Everyone

Sharing the joy of building software with someone who doesn’t have an engineering background is hard. Today it got a little easier with App Inventor for Android.

App Inventor for Android is a Google Labs project that makes it possible to create complex Android applications without having to write any code. This is because, instead of writing code, you can visually design the way the app looks and use blocks to specify behavior.

This helps introduce concepts about logic and programming in a compelling way, without getting lost in syntax and code. And while App Inventor for Android doesn’t have every feature available in the latest Android SDK, it has been used to create some very compelling applications.

For more information about how to participate, take a look at the announcement on the Google Blog.

We look forward to seeing what you think and hearing about your stories. And, yes, the irony of writing a Google Code blog post about avoiding the need to code is not lost on me. :-)

App Inventor for Android is possible due to some significant work done in research on education computing both inside and outside Google. The brainchild of Hal Abelson (visiting faculty), App Inventor for Android is an effort to see if the nature of introductory computing can be changed.

By Ali Pasha, Google Developer Programs


[Gd] .NET Data API SDK updated


Google Code Blog: .NET Data API SDK updated

We are proud to announce a new release of the Google Data API .NET SDK.

This new release, version 1.6, adds support for the latest Contacts and Documents services, as well as support for Google Analytics. It also sports an easy-to-use ResumableUpload component to support those gigantic YouTube videos that you are dying to upload, as well as other services that support this feature, such as Google Documents.

For a complete list of changes and bugfixes:

To download this release:
It comes in versions for Windows, Mono and Windows Mobile.

If you want to report bugs or request features:

Happy coding

By Frank Mantek, Google Developer Team

[Gd] How to have your (Cup)cake and eat it too


Android Developers Blog: How to have your (Cup)cake and eat it too

[This post is by Adam Powell, his second touchy-feely outing in just a few weeks. I asked him to send me a better picture than we ran last time, and got this in response. Photo by our own Romain Guy. — Tim Bray]

Android developers concerned with targeting every last device with their apps are no doubt familiar with this chart:

On July 1, 2010 this was the breakdown of active devices running different versions of the Android platform. With all of the new platform features added to the Android SDK in each version, this chart has many developers shouting the F-word when they are forced to choose between integrating newer platform features and providing their app to the widest possible audience.

Savvy Android developers already know that these two options aren’t really mutually exclusive, but that straddling between them can be painful. In this post I’m going to show you that it doesn’t have to be that way.

Several weeks ago we took a look at how to handle multitouch on Android 2.0 (Eclair) and above, and by the end we had a simple demo app. That app uses features exclusive to Android 2.2 (Froyo) which as of this writing hasn’t had a chance to reach many devices yet. In this post we’re going to refactor that demo to run on devices all the way back to Android 1.5 (Cupcake). If you’d like to follow along, start off by grabbing the code in the trunk of the android-touchexample project on Google Code.

The problem manifests

The uses-sdk tag in your AndroidManifest.xml can specify both a minSdkVersion and a targetSdkVersion. You can use this to declare that while your app is prepared to run on an older version of the platform, it knows about newer versions. Your app can now build against newer SDKs. However, if your code accesses newer platform functionality directly you will probably see something like this in the system log of devices running an older version of Android:

E/dalvikvm(  792): Could not find method android.view.MotionEvent.getX, referenced from method com.example.android.touchexample.TouchExampleView.onTouchEvent
W/dalvikvm( 792): VFY: unable to resolve virtual method 17: Landroid/view/MotionEvent;.getX (I)F
W/dalvikvm( 792): VFY: rejecting opcode 0x6e at 0x0006
W/dalvikvm( 792): VFY: rejected Lcom/example/android/touchexample/TouchExampleView;.onTouchEvent (Landroid/view/MotionEvent;)Z
W/dalvikvm( 792): Verifier rejected class Lcom/example/android/touchexample/TouchExampleView;
D/AndroidRuntime( 792): Shutting down VM
W/dalvikvm( 792): threadid=3: thread exiting with uncaught exception (group=0x4000fe70)

We broke the contract of minSdkVersion, and here is the result. When we build our app against SDK 8 (Froyo) but declare minSdkVersion="3" (Cupcake) we promise the system that we know what we’re doing and we won’t try to access anything that doesn’t exist. If we mess this up, we see the above, and our users see an ugly error message.

Cue a lot of frustrated users and one-star ratings on Market. We need a safe way of accessing newer platform functionality without making the verifier angry on older platform versions.

Stop and reflect

Many Android developers are already familiar with the practice of accomplishing this through reflection. Reflection lets your code interface with the runtime, detect when certain methods or classes are present, and invoke or instantiate them without touching them directly.

The prospect of querying each platform feature individually and conditionally invoking it using reflection isn’t pretty. It’s ugly. It’s slow. It’s cumbersome. Most of all, heavy use can turn your app’s codebase into an unmaintainable mess. What if I said there is a way to write Android apps that target Android 1.5 (Cupcake) through 2.2 (Froyo) and beyond with a single codebase and no reflection at all?
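To see why, here is what even one guarded call looks like via reflection (a hypothetical sketch for illustration; the helper name is not from the Android SDK):

```java
import java.lang.reflect.Method;

public class ReflectionProbe {
  // Looks up a zero-argument method at runtime and invokes it if present.
  // Every single version-dependent call needs this kind of ceremony,
  // which is why heavy reflection quickly becomes unmaintainable.
  public static Object invokeIfPresent(Object target, String methodName) {
    try {
      Method m = target.getClass().getMethod(methodName);
      return m.invoke(target);
    } catch (NoSuchMethodException e) {
      return null; // Method not available on this platform version.
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }
}
```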

Lazy Loading

Computer science researcher Bill Pugh published and popularized a method of writing singletons in Java that takes advantage of the laziness of ClassLoaders. Wikipedia explains his solution further. The code looks like this:

public class Singleton {
    // Private constructor prevents instantiation from other classes
    private Singleton() {}

    /**
     * SingletonHolder is loaded on the first execution of Singleton.getInstance()
     * or the first access to SingletonHolder.INSTANCE, not before.
     */
    private static class SingletonHolder {
        private static final Singleton INSTANCE = new Singleton();
    }

    public static Singleton getInstance() {
        return SingletonHolder.INSTANCE;
    }
}
There is a very important guaranteed behavior at work here explained by the comment above SingletonHolder. Java classes are loaded and initialized on first access - instantiating the class or accessing one of its static fields or methods for the first time. This is relevant to us because classes are verified by the VM when they are loaded, not before. We now have everything we need to write Android apps that span versions without reflection.
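You can observe this laziness directly on a desktop JVM. In the sketch below (a hypothetical demonstration, not part of the Android SDK), the holder class's static initializer runs only when getInstance() is first called, not when the outer class is loaded:

```java
public class LazySingleton {
  // Flipped to true only when SingletonHolder is actually initialized.
  static boolean holderLoaded = false;

  private LazySingleton() {}

  private static class SingletonHolder {
    static {
      // Runs on first access to SingletonHolder, not before. On Android,
      // class verification happens at the same moment, which is the key
      // to the compatibility technique described here.
      holderLoaded = true;
    }
    private static final LazySingleton INSTANCE = new LazySingleton();
  }

  public static LazySingleton getInstance() {
    return SingletonHolder.INSTANCE;
  }
}
```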

Designing for compatibility

As it turns out this is fairly simple to apply. You generally will want your app to degrade gracefully on older platform versions, dropping features or providing alternate functionality when the platform support isn’t available. Since Android platform features are tied to the API level you have only one axis to consider when designing for compatibility.

In most cases this version support can be expressed as a simple class hierarchy. You can design your app to access version-sensitive functionality through a version-independent interface or abstract class. Subclasses of that interface intended to run on newer platform versions will support newer platform features, and subclasses intended for older versions might need to present alternate ways for your users to access app functionality.

Your app can use a factory method, abstract factory, or other object creation pattern to instantiate the proper subclass at runtime based on the information exposed by android.os.Build.VERSION. This last step ensures that the system will never attempt to load a class it can’t verify, preserving compatibility.
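Stripped of Android specifics, the dispatch pattern looks like this (a hypothetical sketch; the class names and version cutoff are illustrative, and on a device the version number would come from android.os.Build.VERSION):

```java
public abstract class VersionedHelper {
  public abstract String describe();

  // Implementation safe for all supported platform versions.
  private static class BaseHelper extends VersionedHelper {
    public String describe() { return "base features"; }
  }

  // Implementation that would reference newer platform classes; it is
  // never loaded (or verified) unless newInstance selects it.
  private static class NewerHelper extends BaseHelper {
    public String describe() { return "newer features"; }
  }

  // Chooses an implementation from the runtime platform version.
  public static VersionedHelper newInstance(int sdkVersion) {
    if (sdkVersion < 8) {
      return new BaseHelper();
    } else {
      return new NewerHelper();
    }
  }
}
```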

The principle in practice

At the beginning of this post I said that we are going to refactor the touch example app from Making Sense of Multitouch to be compatible from API level 3 (Cupcake) on through API level 8 (Froyo). In that post I pointed out that GestureDetectors can be a useful pattern for abstracting the processing of touch events. At the time I didn’t realize how soon that statement would be put to the test. We can refactor the version-specific elements of the demo app’s touch handling into an abstract GestureDetector.

Before we begin the real work, we need to change our manifest to declare that we support API level 3 devices with minSdkVersion in the uses-sdk tag. Keep in mind that we’re still targeting SDK 8, both with targetSdkVersion in our manifest and in our project configuration. Our manifest now looks like this:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.android.touchexample">
    <application android:icon="@drawable/icon" android:label="@string/app_name">
        <activity android:name=".TouchExampleActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
    <uses-sdk android:minSdkVersion="3" android:targetSdkVersion="8" />
</manifest>

Our TouchExampleView class isn’t compatible with Android versions prior to Froyo thanks to its use of ScaleGestureDetector, and it isn’t compatible with versions prior to Eclair thanks to its use of the newer MotionEvent methods that return multitouch data. We need to abstract that functionality out into classes that will not be loaded on versions of the platform that don’t support it. To do this, we will create the abstract class VersionedGestureDetector.

The example app allows the user to perform two gestures, drag and scale. VersionedGestureDetector will therefore publish two events to an attached listener, onDrag and onScale. TouchExampleView will obtain a VersionedGestureDetector instance appropriate to the platform version, filter incoming touch events through it, and respond to the resulting onDrag and onScale events accordingly.

The first pass of VersionedGestureDetector looks like this:

public abstract class VersionedGestureDetector {
    OnGestureListener mListener;

    public abstract boolean onTouchEvent(MotionEvent ev);

    public interface OnGestureListener {
        public void onDrag(float dx, float dy);
        public void onScale(float scaleFactor);
    }
}

We’ll start with the simplest functionality first, the VersionedGestureDetector for Cupcake. For simplicity’s sake in this example we will implement each version as a private static inner class of VersionedGestureDetector. You can organize this however you please, of course, as long as you use the lazy loading technique shown above or some equivalent. Don’t touch any class that directly accesses functionality not supported by your platform version.

private static class CupcakeDetector extends VersionedGestureDetector {
    float mLastTouchX;
    float mLastTouchY;

    public boolean onTouchEvent(MotionEvent ev) {
        switch (ev.getAction()) {
        case MotionEvent.ACTION_DOWN: {
            mLastTouchX = ev.getX();
            mLastTouchY = ev.getY();
            break;
        }
        case MotionEvent.ACTION_MOVE: {
            final float x = ev.getX();
            final float y = ev.getY();

            mListener.onDrag(x - mLastTouchX, y - mLastTouchY);

            mLastTouchX = x;
            mLastTouchY = y;
            break;
        }
        }
        return true;
    }
}

This simple implementation dispatches onDrag events whenever a pointer is dragged across the touchscreen. The values it passes are the X and Y distances traveled by the pointer.

In Eclair and later we will need to properly track pointer IDs during drags so that our draggable object doesn’t jump around as extra pointers enter and leave the touchscreen. The base implementation of onTouchEvent in CupcakeDetector can handle drag events for us with a few tweaks. We’ll add the methods getActiveX and getActiveY to fetch the appropriate touch coordinates and override them in EclairDetector to get the coordinates from the correct pointer:

private static class CupcakeDetector extends VersionedGestureDetector {
    float mLastTouchX;
    float mLastTouchY;

    float getActiveX(MotionEvent ev) {
        return ev.getX();
    }

    float getActiveY(MotionEvent ev) {
        return ev.getY();
    }

    public boolean onTouchEvent(MotionEvent ev) {
        switch (ev.getAction()) {
        case MotionEvent.ACTION_DOWN: {
            mLastTouchX = getActiveX(ev);
            mLastTouchY = getActiveY(ev);
            break;
        }
        case MotionEvent.ACTION_MOVE: {
            final float x = getActiveX(ev);
            final float y = getActiveY(ev);

            mListener.onDrag(x - mLastTouchX, y - mLastTouchY);

            mLastTouchX = x;
            mLastTouchY = y;
            break;
        }
        }
        return true;
    }
}

And now EclairDetector, overriding the new getActiveX and getActiveY methods. Most of this code should be familiar from the original touch example:

private static class EclairDetector extends CupcakeDetector {
    private static final int INVALID_POINTER_ID = -1;
    private int mActivePointerId = INVALID_POINTER_ID;
    private int mActivePointerIndex = 0;

    float getActiveX(MotionEvent ev) {
        return ev.getX(mActivePointerIndex);
    }

    float getActiveY(MotionEvent ev) {
        return ev.getY(mActivePointerIndex);
    }

    public boolean onTouchEvent(MotionEvent ev) {
        final int action = ev.getAction();
        switch (action & MotionEvent.ACTION_MASK) {
        case MotionEvent.ACTION_DOWN:
            mActivePointerId = ev.getPointerId(0);
            break;
        case MotionEvent.ACTION_CANCEL:
        case MotionEvent.ACTION_UP:
            mActivePointerId = INVALID_POINTER_ID;
            break;
        case MotionEvent.ACTION_POINTER_UP:
            final int pointerIndex = (ev.getAction() & MotionEvent.ACTION_POINTER_INDEX_MASK)
                    >> MotionEvent.ACTION_POINTER_INDEX_SHIFT;
            final int pointerId = ev.getPointerId(pointerIndex);
            if (pointerId == mActivePointerId) {
                // This was our active pointer going up. Choose a new
                // active pointer and adjust accordingly.
                final int newPointerIndex = pointerIndex == 0 ? 1 : 0;
                mActivePointerId = ev.getPointerId(newPointerIndex);
                mLastTouchX = ev.getX(newPointerIndex);
                mLastTouchY = ev.getY(newPointerIndex);
            }
            break;
        }

        mActivePointerIndex = ev.findPointerIndex(mActivePointerId);
        return super.onTouchEvent(ev);
    }
}

EclairDetector calls super.onTouchEvent after determining the active pointer index and lets CupcakeDetector take care of dispatching the drag event. Supporting multiple platform versions doesn’t have to mean code duplication.

Finally, let’s add scale gesture support for Froyo devices that have ScaleGestureDetector. We’ll need a couple more changes to CupcakeDetector first; we don’t want to drag normally while scaling. Some devices have touchscreens that don’t deal well with it, and we would want to handle it differently on devices that do anyway. We’ll add a shouldDrag method to CupcakeDetector that we’ll check before dispatching onDrag events.

The final CupcakeDetector:

private static class CupcakeDetector extends VersionedGestureDetector {
    float mLastTouchX;
    float mLastTouchY;

    float getActiveX(MotionEvent ev) {
        return ev.getX();
    }

    float getActiveY(MotionEvent ev) {
        return ev.getY();
    }

    boolean shouldDrag() {
        return true;
    }

    public boolean onTouchEvent(MotionEvent ev) {
        switch (ev.getAction()) {
        case MotionEvent.ACTION_DOWN: {
            mLastTouchX = getActiveX(ev);
            mLastTouchY = getActiveY(ev);
            break;
        }
        case MotionEvent.ACTION_MOVE: {
            final float x = getActiveX(ev);
            final float y = getActiveY(ev);

            if (shouldDrag()) {
                mListener.onDrag(x - mLastTouchX, y - mLastTouchY);
            }

            mLastTouchX = x;
            mLastTouchY = y;
            break;
        }
        }
        return true;
    }
}

EclairDetector remains unchanged. FroyoDetector is below. shouldDrag will return true as long as we do not have a scale gesture in progress:

private static class FroyoDetector extends EclairDetector {
    private ScaleGestureDetector mDetector;

    public FroyoDetector(Context context) {
        mDetector = new ScaleGestureDetector(context,
                new ScaleGestureDetector.SimpleOnScaleGestureListener() {
            @Override public boolean onScale(ScaleGestureDetector detector) {
                mListener.onScale(detector.getScaleFactor());
                return true;
            }
        });
    }

    boolean shouldDrag() {
        return !mDetector.isInProgress();
    }

    public boolean onTouchEvent(MotionEvent ev) {
        mDetector.onTouchEvent(ev);
        return super.onTouchEvent(ev);
    }
}

Now that we have our detector implementations in order we need a way to create them. Let’s add a factory method to VersionedGestureDetector:

public static VersionedGestureDetector newInstance(Context context,
        OnGestureListener listener) {
    final int sdkVersion = Integer.parseInt(Build.VERSION.SDK);
    VersionedGestureDetector detector = null;
    if (sdkVersion < Build.VERSION_CODES.ECLAIR) {
        detector = new CupcakeDetector();
    } else if (sdkVersion < Build.VERSION_CODES.FROYO) {
        detector = new EclairDetector();
    } else {
        detector = new FroyoDetector(context);
    }

    detector.mListener = listener;

    return detector;
}

Since we’re targeting Cupcake, we don’t have access to Build.VERSION.SDK_INT yet. We have to parse the now-deprecated Build.VERSION.SDK instead. But why is accessing Build.VERSION_CODES.ECLAIR and Build.VERSION_CODES.FROYO safe? As primitive static final int constants, these are inlined by the compiler at build time.

Our VersionedGestureDetector is ready. Now we just need to hook it up to TouchExampleView, which has become considerably shorter:

public class TouchExampleView extends View {
    private Drawable mIcon;
    private float mPosX;
    private float mPosY;

    private VersionedGestureDetector mDetector;
    private float mScaleFactor = 1.f;

    public TouchExampleView(Context context) {
        this(context, null, 0);
    }

    public TouchExampleView(Context context, AttributeSet attrs) {
        this(context, attrs, 0);
    }

    public TouchExampleView(Context context, AttributeSet attrs, int defStyle) {
        super(context, attrs, defStyle);
        mIcon = context.getResources().getDrawable(R.drawable.icon);
        mIcon.setBounds(0, 0, mIcon.getIntrinsicWidth(), mIcon.getIntrinsicHeight());

        mDetector = VersionedGestureDetector.newInstance(context, new GestureCallback());
    }

    @Override
    public boolean onTouchEvent(MotionEvent ev) {
        mDetector.onTouchEvent(ev);
        return true;
    }

    @Override
    public void onDraw(Canvas canvas) {
        super.onDraw(canvas);

        canvas.save();
        canvas.translate(mPosX, mPosY);
        canvas.scale(mScaleFactor, mScaleFactor);
        mIcon.draw(canvas);
        canvas.restore();
    }

    private class GestureCallback implements VersionedGestureDetector.OnGestureListener {
        public void onDrag(float dx, float dy) {
            mPosX += dx;
            mPosY += dy;
            invalidate();
        }

        public void onScale(float scaleFactor) {
            mScaleFactor *= scaleFactor;

            // Don't let the object get too small or too large.
            mScaleFactor = Math.max(0.1f, Math.min(mScaleFactor, 5.0f));

            invalidate();
        }
    }
}


Wrapping up

We’ve now adapted the touch example app to work from Android 1.5 on through the latest and greatest, taking advantage of newer platform features as available without a single reflective call. The same principles shown here can apply to any new Android feature that you want to use while still allowing your app to run on older platform versions:

  • The ClassLoader loads classes lazily and will only load and verify classes on first access.

  • Factor out app functionality that can differ between platform versions with a version-independent interface or abstract class.

  • Instantiate a version-dependent implementation of it based on the platform version detected at runtime. This keeps the ClassLoader from ever touching a class that it will not be able to verify.

To see the final cross-version touch example app, check out the “cupcake” branch of the android-touchexample project on Google Code.

Extra Credit

In this example we didn’t provide another way for pre-Froyo users to zoom since ScaleGestureDetector was only added as a public API for 2.2. For a real app we would want to offer some alternate affordance to users. Traditionally Android offers a set of small tappable zoom buttons along the bottom of the screen. The ZoomControls and ZoomButtonsController classes in the framework can help you present these controls to the user in a standard way. Implementing this is left as an exercise for the reader.