Friday, September 4, 2009

[Gd] App Engine Launcher for Windows


Google App Engine Blog: App Engine Launcher for Windows

As recently announced on the Google App Engine Blog, the 1.2.5 SDK for Python now includes a GUI for creating, running, and deploying App Engine applications when developing on Windows. We call this the Google App Engine Launcher.

About a year ago, a few of us recognized a need for a client tool to help with App Engine development. In our 20% time, we wrote a launcher for the Mac. Of course, not all App Engine developers have Macs, so more work was needed. Thus, a new crew of 20%ers set off to write a launcher for our App Engine developers on Windows. Although Google is spread out across many offices around the world, it is surprisingly easy to connect with passionate engineers. For example, this new launcher for Windows has contributions from Dave Symonds in Australia, Mark Dalrymple on the east coast, and more engineers here in Mountain View.

The Windows launcher is written in Python and uses wxPython for its GUI. This means (with a little care) the launcher should work on Linux, and we'd like Linux developers to have the option of using it. Although we ship a binary of the Launcher for Windows (thanks to py2exe), shipping binaries for Linux is a bit more challenging. Fortunately, Google has a well-traveled path for solving this problem. For example, Google O3D provides binaries for Windows/Mac; it also provides source code and instructions for building on Linux. Thus inspired, we've open sourced the Windows launcher so that developers can use it on other platforms.

The goal of the launcher is to help make App Engine development quick and easy. There may be other tasks you'd like to integrate (e.g. run tests, re-encode images before deploying, etc) and with the launcher now open sourced, you can add them! We look forward to seeing contributions from the community.

We have also started the process of open sourcing the Mac version of the launcher. The source code is now available; however, it references some missing Google libraries, so it won't yet compile in its current state. Fortunately, those libraries have also been open sourced, so it will be possible to get things up and running using entirely open source code. I'll be using more of my 20% time to clean up the Mac launcher project in the coming weeks.

We hope the launcher will improve the workflow for App Engine developers. We also hope the source code will enable developers to adapt it to their needs, just as we do on Chrome, my main project. Finally, I am proud to continue a tradition of openness which began with my very first project at Google.

-- John Grabowski, Software Engineer

Let us know how the launcher works for you.

Open Source Code for the App Engine Launcher: for Windows and Linux, and for Mac OS X.

Screenshot of Google App Engine Launcher for Windows

[Gd] Dev Channel Updated with fixes and extension changes


Google Chrome Releases: Dev Channel Updated with fixes and extension changes

The Dev channel has been updated with the following changes:

  • All Platforms
    • [r24663] Closing the download shelf removes all completed and cancelled downloads from it. (Issue: 15712)
    • [r24331] Fixes various audio/video events which were not firing. (Issues: 20152, 16768)
    • [r24519] Saved passwords for proxy servers are now correctly labeled. (Issue: 12992)
    • [r24384] Add single line of tips to New New Tab Page. (Issue: 19162)
  • Mac
    • [r24241] HTTP Auth dialog autofills passwords.
    • New Tab Page displays much faster. (Issue: 13337)
    • [r23722, r23955] Improved scrolling and display performance, particularly on machines without powerful graphics hardware (such as laptops).
    • [r24621] Plugins starting offscreen will draw correctly when they scroll into view. (Issue: 20234)
  • Linux
    • [r24241] HTTP Auth dialog autofills passwords.
    • [r24558] Fix the find bar so the match count is inside the entry. (Issue: 17962)
    • [r24831] Now respects both GNOME and KDE proxy settings. (Issue: 17363)
    • [r24930] Implemented "Confirm form resubmission" dialog. (Issue: 19761)
    • [r24454] Don't paste primary selection when middle clicking scrollbars. (Issue: 16400)
    • [r24287] Fix inability to select Times New Roman in font options with some versions of Pango. (Issue: 19823)
    • [r24903, r25007] Fixed tab dragging on 64-bit. (Issue: 20513)
    • [r25039] Fixed 64-bit JavaScript crash on some CPUs. (Issue: 20789)
  • Extensions
    • Two breaking changes (see mailing list post for more information):
      • [r24816] Enforce granular permissions
      • [r24770] Modified several APIs to be more consistent
    • [r24539] Polish the look of Linux extension shelf. (Issue: 16759)
    • [r24599] Polish extension install UI.
    • [r24864] Allow extension toolstrip to detach. (ctrl+alt+b)
    • [r24871, r24877] Polish chrome://extensions/ page. Add convenience developer tools to load an extension and pack an extension.

          More details about additional changes are available in the svn log of all revisions.

          You can find out about getting on the Dev channel here:

          If you find new issues, please let us know by filing a bug at

          Jonathan Conradt
          Engineering Program Manager


          Thursday, September 3, 2009

          [Gd] Gmail for Mobile HTML5 Series: Reducing Startup Latency


          Google Code Blog: Gmail for Mobile HTML5 Series: Reducing Startup Latency

          On April 7th, Google launched a new version of Gmail for mobile for iPhone and Android-powered devices. We shared the behind-the-scenes story through this blog and decided to share more of what we've learned in a brief series of follow-up blog posts. This week, I'll talk about how modularization can be used to greatly reduce the startup latency of a web app.

          To a user, the startup latency of an HTML 5 based application is critical. It is their first impression of the application's performance. If it's really slow, they might not even bother to wait for the app to load before navigating away. Even if your application is blazing fast after it loads, the user may never get the chance to experience it.

          There are several aspects of an HTML 5 based application that contribute to startup latency:
          1. Network time to fetch the application (JavaScript + HTML)
          2. JavaScript parse time
          3. Code execution time to fetch the data and render the home page of your application
          The third issue is up to you! The first two, however, are directly correlated with the size of the application. This is a tricky problem: as your application matures, it will gain more features and its code size will grow. So, what to do? Modularize your application! Split your code into independent, standalone modules. Consider splitting each view/screen of your application into its own module, and implement each new feature as its own module too.

          That is only half the story. Now that your code is modularized, you need to decide which subset of these modules is critical to loading your application's home page. All the non-core modules should be downloaded and parsed at a later time. With a consistent code size for your startup code, you can maintain a consistent startup time. Now, let's go into the nitty-gritty details of how we built an application with lazy-loaded modules.
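The core-versus-lazy split can be sketched as a small module registry. The names below are illustrative, not Gmail's actual code: each module registers a factory function, and that factory only runs the first time the module is actually needed.

```javascript
var modules = {};   // name -> factory function (registered, but not yet run)
var loaded = {};    // name -> initialized module

function defineModule(name, factory) {
  modules[name] = factory;
}

function requireModule(name) {
  if (!loaded[name]) {
    loaded[name] = modules[name]();  // initialization cost paid on first use
  }
  return loaded[name];
}

// Core module, needed to render the home page:
defineModule('favoriteCities', function() {
  return { render: function() { return 'favourite cities'; } };
});

// Non-core module, initialized only when the user opens that screen:
defineModule('forecast', function() {
  return { render: function(city) { return 'forecast for ' + city; } };
});
```

With this shape, startup only pays for the modules the home page calls `requireModule` on; everything else waits.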

          How to Split Your Code into Modules

          Splitting an application into individual modules might not be as simple as you think. Code that serves a common purpose/functionality should be grouped together and form a module (comparable to a library). As mentioned earlier, we selected which modules are critical to the home page of the app and which modules can be lazy-loaded at a later time. Let's use a Weather application as an example:

          High Level Functionality:
          • A "Weather in my Favourite Cities" home page
          • Click on a city to view the city's entire week forecast
          • Weather data comes from an external web service
          Possible Module Separation:
          • Weather data model
          • Weather web service API
          • Common UI widgets (buttons, toolbars, navigation, etc)
          • Favourite Cities page
          • City Weather Forecast page
          Now let's say your users want a "breaking news" feature. No problem: just put the page, the news data API and the data model into a new module.

          One thing to keep in mind is the dependency order of your modules. For modules that have many downstream dependencies, it might make sense to include them as part of the core modules.
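One way to reason about dependency order is to write the module graph down explicitly; modules with many downstream dependents naturally surface as core-module candidates. This is a hypothetical sketch for the Weather app above, not part of the original post:

```javascript
// Hypothetical dependency graph: each module lists what it needs loaded first.
var deps = {
  forecastPage: ['widgets', 'weatherModel', 'weatherApi'],
  favoriteCitiesPage: ['widgets', 'weatherModel'],
  weatherApi: ['weatherModel'],
  weatherModel: [],
  widgets: []
};

// Depth-first post-order walk: emits dependencies before their dependents.
function loadOrder(name, order, seen) {
  order = order || [];
  seen = seen || {};
  if (seen[name]) { return order; }
  seen[name] = true;
  (deps[name] || []).forEach(function(d) { loadOrder(d, order, seen); });
  order.push(name);
  return order;
}
```

Here `weatherModel` appears before everything that uses it, which is exactly the property that makes it worth bundling into the core download.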

          How to Lazy Load the Modules

          Option 1: Script as DOM

          This method uses JavaScript to insert SCRIPT tags into the HEAD's DOM.
          <script type="text/JavaScript">
            function loadFile(url) {
              var script = document.createElement('SCRIPT');
              script.src = url;
              document.getElementsByTagName('HEAD')[0].appendChild(script);
            }
          </script>
          Option 2: XmlHttpRequest (XHR)

          This method sets up an XmlHttpRequest to retrieve the JavaScript. The returned string should be evaluated in the XHR callback (using the eval(string) method). This method is a little more complicated, but it gives you more control over error handling.
          <script type="text/JavaScript">
            function loadFile(url) {
              function callback() {
                if (req.readyState == 4) { // 4 = Loaded
                  if (req.status == 200) {
                    eval(req.responseText);
                  } else {
                    // Error
                  }
                }
              }
              var req = new XMLHttpRequest();
              req.onreadystatechange = callback;
    "GET", url, true);
              req.send(null);
            }
          </script>
          The next question is: when should you lazy load the modules? One strategy is to load them in the background once the home page has loaded, but this approach has drawbacks. First, JavaScript execution in the browser is single threaded, so while you are loading modules in the background, the rest of your app is unresponsive to user actions. Second, it is very difficult to decide when, and in what order, to load the modules. What if a user tries to access a feature/page you have not yet lazy loaded in the background?

          A better strategy is to associate the loading of a module with a user's action. Typically, user actions are associated with an invocation of an asynchronous function (for example, an onclick handler). This is the perfect time to lazy load the module, since the code has to be fetched over the network anyway. Because mobile networks are slow, you can also adopt a strategy where you prefetch the code of the modules in advance and keep it stored in the JavaScript heap, then parse and load the corresponding module only on user action. One word of caution: make sure your prefetching strategy doesn't impact the user's experience - for example, don't prefetch all the modules while you are fetching user data. Remember, dividing up the latency is far better for users than bunching it all together during startup.
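The prefetch-then-parse-on-action idea can be sketched in a few lines. This is an assumption-laden illustration, not Gmail's implementation: the module source arrives as a string (in a real app, via an XHR) and is held unparsed until a click handler asks for it.

```javascript
var prefetched = {};   // name -> unparsed JavaScript source (cheap to hold)
var parsed = {};       // name -> initialized module

function prefetch(name, source) {
  prefetched[name] = source;   // stored in the heap; not parsed or executed yet
}

function loadOnAction(name) {
  if (!parsed[name]) {
    // The parse cost is paid here, inside the user's action, not at startup.
    parsed[name] = eval('(' + prefetched[name] + ')');
  }
  return parsed[name];
}

// Prefetch in the background after the home page is up:
prefetch('newsPage', "{ show: function() { return 'breaking news'; } }");
// ...later, inside an onclick handler:
// loadOnAction('newsPage').show();
```

The string sits in memory costing almost nothing; only the first user action on that feature triggers the expensive parse.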

          For an HTML 5 application that takes advantage of the application cache to reduce startup latency and to serve the application offline, there are a few caveats to be aware of. Mobile networks have decent bandwidth but poor round-trip latency, so listing each module as a separate resource in the manifest incurs quite a bit of extra startup latency when the application cache is empty. Also, if one of the module resources fails to download into the application cache (e.g. the device is disconnected from the network), additional error handling code needs to be written to handle that case. Finally, applications today have no control over when the application cache downloads the resources in the manifest (such a feature is not defined in the current specification of the draft standard). Typically, resources are downloaded once the main page is loaded, but that's not an ideal time, since that's when the application requests user data.

          To work around these caveats, we found a trick that allows you to bundle all of your modules into a single resource without having to parse any of the JavaScript. Of course, with this strategy there is greater latency on the initial download of the single resource (since it contains all your JavaScript modules), but once the resource is stored in the browser's application cache, this issue becomes much less of a factor.

          To combine all modules into a single resource, we wrote each module into a separate script tag and hid the code inside a comment block (/* */). When the resource first loads, none of the code is parsed since it is commented out. To load a module, find the DOM element for the corresponding script tag, strip out the comment block, and eval() the code. If the web app supports XHTML, this trick is even more elegant as the modules can be hidden inside a CDATA tag instead of a script tag. An added bonus is the ability to lazy load your modules synchronously since there's no longer a need to fetch the modules asynchronously over the network.

          On an iPhone 2.2 device, 200k of JavaScript held within a block comment adds 240 ms during page load, whereas 200k of JavaScript that is parsed during page load adds 2600 ms. That's more than a 10x reduction in startup latency from eliminating 200k of unneeded JavaScript parsing during page load! Take a look at the code sample below to see how this is done.
          <script id="lazy">
          /* Make sure you strip out (or replace) comment blocks in your JavaScript first.
          JavaScript of lazy module
          */
          </script>

          <script>
            function lazyLoad() {
              var lazyElement = document.getElementById('lazy');
              var lazyElementBody = lazyElement.innerHTML;
              var jsCode = stripOutCommentBlock(lazyElementBody);
              eval(jsCode);
            }
          </script>

          <div onclick="lazyLoad()"> Lazy Load </div>
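The sample leaves the stripOutCommentBlock helper undefined. A minimal version, which is our assumption rather than the Gmail implementation, just removes the outermost /* */ pair around the module body:

```javascript
// Strip the outermost /* */ comment block and return the code inside it.
// If no well-formed comment block is found, return the text unchanged.
function stripOutCommentBlock(text) {
  var start = text.indexOf('/*');
  var end = text.lastIndexOf('*/');
  if (start == -1 || end == -1 || end < start) {
    return text;
  }
  return text.substring(start + 2, end);
}
```

A production version would also need to handle nested or pre-existing comment blocks inside the module source, which is why the sample warns you to strip those out first.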
          In the future, we hope that the HTML5 standard will allow more control over when the application cache should download resources in the manifest, since using comments to pass along code is not elegant but worked nicely for us. In addition, the snippets of code are not meant to be a reference implementation and one should consider many additional optimizations such as stripping white space and compiling the JavaScript to make its parsing and execution faster. To learn more about web performance, get tips and tricks to improve the speed of your web applications and to download tools, please visit

          Previous posts from Gmail for Mobile HTML5 Series

          HTML5 and Webkit pave the way for mobile web applications
          Using AppCache to launch offline - Part 1
          Using AppCache to launch offline - Part 2
          Using AppCache to launch offline - Part 3
          A Common API for Web Storage
          Suggestions for better performance
          Cache pattern for offline HTML5 web application

          By Bikin Chiu, Software Engineer, Google Mobile

          [Gd] Some News from Android Market


          Android Developers Blog: Some News from Android Market

          I'm pleased to let you know about several updates to Android Market. First, we will soon introduce new features in Android Market for Android 1.6 that will improve the overall experience for users. As part of this change, developers will be able to provide screenshots, promotional icons and descriptions that will better show off applications and games.

          We have also added four new sub-categories for applications: sports, health, themes, and comics. Developers can now choose these sub-categories for both new and existing applications via the publisher website. Finally, we have added seller support for developers in Italy. Italian developers can go to the publisher website to upload applications and target any of the countries where paid applications are currently available to users.

          To take advantage of the upcoming Android Market refresh, we encourage you to visit the Android Market publisher website and upload additional marketing assets. Check out the video below for some of the highlights.


          [Gd] App Engine SDK 1.2.5 released for Python and Java, now with XMPP support


          Google App Engine Blog: App Engine SDK 1.2.5 released for Python and Java, now with XMPP support

          Today we are releasing version 1.2.5 of the App Engine SDK for both Python and Java, our first simultaneous release across both runtimes. We're excited about the great new functionality in this release ... including XMPP!

          XMPP Support

          XMPP (or Jabber as it is sometimes known) is an open standard for communicating in real-time (instant messaging). One of the most popular API requests in the App Engine issue tracker has been support for XMPP, so today we are excited to mark that issue closed with the release of our new XMPP API for both Python and Java SDKs!

          Like the other APIs that App Engine provides for developers, XMPP is built on the same powerful infrastructure that serves other Google products. In this case, we take advantage of the servers that run Google Talk. This new API allows your app to exchange messages with users on any XMPP-based network, including (but not limited to!) Google Talk. If you're currently participating in the Google Wave developer preview, you can also use the XMPP API to build bots that interact with your waves.

          We've tried to make the XMPP API as simple as possible to incorporate into your existing Python or Java applications. We use the same webhook pattern that Cron and Task Queue already use: you send outgoing messages with an API call; you receive incoming messages as an HTTP POST. You can read more about the features the XMPP API in our documentation (Python, Java).

          We're very proud of our first XMPP release, but there's still more work to do. In the future we hope to provide even more functionality to apps, such as user status (presence) and info on new subscriptions. If you have particular requests or feedback, please let us know.

          Task Queue API for Java

          Python developers have been processing tasks offline using App Engine Task Queues since mid-June, but until now the feature was not available in the App Engine for Java SDK. The 1.2.5 SDK now includes support for creating Tasks and Queues in our Java runtime.

          If you're familiar with the Python Task Queue API, the Java version will look very familiar. We use the same webhooks pattern as with Cron (and now XMPP). The API provides a simple pattern for creating tasks, assigning them a payload and a worker, and inserting them into queues for scheduling and processing. There's lots of potential with the Task Queue API, so make sure to check out the Java Task Queue Documentation for more details.

          Raising Limits for the Task Queue API

          With the 1.2.5 release, we are increasing the daily quota for Task Queue insertions to 100K for billing-enabled apps. Ultimately, we will raise the quota for both free and billing-enabled apps, but we hope this intermediate step opens up new scenarios for our developers using Task Queues.

          New App Engine Launcher for Windows

          Last but not least, we're very excited that 1.2.5 for Python now includes a Windows-based version of a useful tool that Mac OS X users have been enjoying for some time: the Google App Engine Launcher!

          Screenshot of Google App Engine Launcher for Windows

          This tool simplifies the process of creating new Python projects, testing them locally, and uploading them to the App Engine servers. In addition, we're releasing the source code for both Mac and Windows App Engine Launchers as open source projects. Watch this space for more details on where you can find the source, and how Linux developers can use the Launcher as well.

          1.2.5 also includes the usual set of bug fixes, tweaks, and API polish. For a more detailed look at all the things that have changed this release, take a look at our release notes and, as always, let us know what you think!


          [Gd] A New FTP Implementation Goes Live


          Chromium Blog: A New FTP Implementation Goes Live

          Starting in the Dev channel release, we are using our new FTP implementation by default on Windows. (It was already enabled by default on Linux and Mac.) This switchover is an important milestone in the development of our network stack. We'd like to acknowledge two Chromium contributors who made this possible.

          The new FTP implementation was initially written by Ibrar Ahmed single-handedly. It was a long journey for him because he worked on it in his spare time. Ibrar has a master's degree in computer science from International Islamic University. After working as a software engineer and associate architect at other companies, he recently started his own tele-medicine company. We thank Ibrar for his contribution to the Chromium network stack!

          Paweł Hajdan Jr. started to work on the new FTP code in July as one of his summer intern projects at Google. Paweł added new unit tests, fixed bugs and compatibility issues, and is taking the lead in bringing the new FTP code to production quality.

          Finally, we used Mozilla code for parsing and formatting FTP directory listings (ParseFTPList.cpp), which was originally written by Cyrus Patel.

          In the near term, the original WinInet-based FTP implementation will still be available as an option on Windows. Specify the --wininet-ftp command-line option to enable it. (The original --new-ftp option is now obsolete and ignored.) During this period we will fix FTP bugs only in the new FTP implementation. When we're happy with the quality of the new FTP code, we will remove the original WinInet-based implementation, finally eliminating our dependency on WinInet.

          Please help us achieve that goal by testing FTP with a Dev channel release and filing bug reports. Follow these guidelines when reporting bugs:
          • Please don't add a comment like "Here is another URL that doesn't work for me" to a bug. Always open a new bug, and give a link to another bug if you think they are similar.
          • Make the steps to reproduce as detailed as possible, and always include the version number of Chrome.
          • Check if the problem can be reproduced with --wininet-ftp on Windows and include that information in the bug report.
          Posted by Wan-Teh Chang, Software Engineer

          Wednesday, September 2, 2009

          [Gd] Hydro4GE — a PaaS built with GWT


          Google Web Toolkit Blog: Hydro4GE — a PaaS built with GWT


          From time to time we like to share experiences from fellow developers with you. It's a pleasure to present you today with this guest blog post by Geoff Speicher, Chief System Architect of Hydro4GE.


          Hydro4GE (pronounced
          hy-dro-forge) is a Platform-as-a-Service (PaaS) for building online
          database applications. Building a powerful tool for developers
          itself requires a powerful toolkit to meet developers' expectations
          of a development environment: a highly-interactive, rich user
          interface (UI) with emerging features such as database schema
          visualization via interactive, scalable graphics. This article
          describes our experience in using GWT to rewrite our old HTML+AJAX
          UI to deliver all of these features with a polished look and solid
          performance.

          Building the UI

          When we set out to rewrite the UI using GWT, a quick inventory
          of available widgets was in order. Our initial reaction was similar
          to what some others have expressed: that the native GWT widget
          library did not have quite the same breadth or flair as some
          third-party libraries such as ExtJS or SmartGWT. However,
          experimentation with these libraries proved that they are also
          fairly heavy in weight, and noticeably impacted the application's
          performance both in its initial download and its interactive use.

          The reason is simple: these generic JavaScript libraries, though
          highly optimized, are still just that — generic. It
          is unrealistic for these libraries to achieve the same level of
          efficiency as code that is produced by the GWT compiler. For this
          reason, we wanted to avoid the use of external JavaScript libraries
          when possible.

          After some careful consideration and a more detailed analysis
          of our needs, we discovered that we only needed a small handful of widgets that
          GWT didn't already provide, and all but one of those (covered in the next section)
          were easily built using Composite. There is already a
          blog entry on building Composites,
          so we will not cover that topic here.
          The visual styling of both the native and Composite widgets was easy
          to customize thanks to GWT's liberal and logical use of CSS classnames.

          As a quick demonstration, it's pretty impressive what a difference
          you can achieve by building a few simple Composite widgets and
          tweaking some of the default styles. Compare the default styling
          of an input form with one modified by a few simple customizations:

          GWT Default Styles & Widgets

          Customized for Hydro4GE

          If you have not already jumped on the bandwagon, Firefox+Firebug
          is an indispensable tool for inspecting HTML and tweaking CSS.
          Thanks to this and the flexibility of the GWT library, we were able
          to achieve the polished look that we wanted without much work and
          without sacrificing performance.

          Building a Widget from an External Library

          What about the one widget that we couldn't build using Composite?
          We want to use vector graphics to generate scalable diagrams depicting
          database structure and user interaction for systems you build with
          Hydro4GE. There are a small handful of vector graphics libraries
          out there for GWT, but all of them require concessions that we are
          not willing or able to make. The Google Web Toolkit Incubator's
          canvas widget does not support text rendering (due to a limitation of HTML canvas),
          and although projects such as abstractcanvas
          attempt to overcome this limitation, text cannot be rotated and
          precisely scaled.

          In this section, we will show you how to integrate with Raphael, a lightweight JavaScript
          library for cross-platform vector graphics. Raphael side-steps the
          HTML canvas issue by using SVG on supported platforms, and Microsoft
          VML on Internet Explorer. Raphael does everything we need to build
          our diagrams, except for one thing: integrating directly with our GWT
          application.

          We achieved this integration through two levels of abstraction: (1) a
          JavaScript Overlay Type
          to provide a zero-overhead interface to the underlying JavaScript API,
          and (2) a
          GWT Widget
          to wrap the overlay with a more Java-friendly API, resulting in a first-class Widget that will operate
          side-by-side with native GWT Widgets. Let's have a look at the
          details for this two-part implementation, starting with the
          Overlay class.

          The Overlay

          The RaphaelJS class is nearly an exact replica of the underlying
          Raphael API. This is made necessary by the restrictions that GWT
          enforces on JavaScriptObject types, so the implementation is fairly uncreative:
          one method per Raphael method. Many of these methods appear in the nested
          class Shape, which represents the type returned by most of the native Raphael
          methods. The basic idea for the class is:

          class RaphaelJS extends JavaScriptObject {

            protected RaphaelJS() {}  // required by GWT for JavaScriptObject subclasses

            protected static class Shape extends JavaScriptObject {

              protected Shape() {}

              public final native Shape rotate(double degree, boolean abs) /*-{
                return this.rotate(degree, abs);
              }-*/;

              public final native Shape scale(double sx, double sy) /*-{
                return this.scale(sx, sy);
              }-*/;

              public final native Shape translate(double dx, double dy) /*-{
                return this.translate(dx, dy);
              }-*/;

              // ...
            }

            /** factory method */
            static public final native RaphaelJS create(Element e, int w, int h) /*-{
              return $wnd.Raphael(e, w, h);
            }-*/;

            public final native Element node() /*-{
              return this.node;
            }-*/;

            public final native Shape circle(double x, double y, double r) /*-{
              return, y, r);
            }-*/;

            public final native Shape rect(double x, double y, double w, double h) /*-{
              return this.rect(x, y, w, h);
            }-*/;

            // ...
          }

          The Widget

          The Overlay bridges the gap between GWT and JavaScript, but the
          Widget is what truly makes the library useful to our application.
          Since the Widget does not have the restrictions of the JavaScriptObject
          Overlay, we have the freedom to define our own API as an adaptor
          to the Overlay.

          public class Raphael extends Widget {
            private RaphaelJS overlay;

            public class Shape extends Widget {
              protected RaphaelJS.Shape rs;

              protected Shape(RaphaelJS.Shape s) {
                rs = s;
              }

              public Shape rotate(double degree, boolean isAbsolute) {
                rs.rotate(degree, isAbsolute);
                return this;
              }

              public Shape scale(double sx, double sy) {
                rs.scale(sx, sy);
                return this;
              }

              public Shape translate(double dx, double dy) {
                rs.translate(dx, dy);
                return this;
              }

              // ...
            }

            public class Circle extends Shape {
              public Circle(double x, double y, double r) {
                super(, y, r));
              }
            }

            public class Rectangle extends Shape {
              public Rectangle(double x, double y, double w, double h) {
                super(overlay.rect(x, y, w, h));
              }

              public Rectangle(double x, double y, double w, double h, double r) {
                super(overlay.rect(x, y, w, h, r));
              }
            }

            public class Text extends Shape {
              public Text(double x, double y, String str) {
                super(overlay.text(x, y, str));
              }
            }

            // ...

            public Raphael(int width, int height) {
              Element raphaelDiv = DOM.createDiv();
              setElement(raphaelDiv);
              overlay = RaphaelJS.create(raphaelDiv, width, height);
            }
          }

          This implementation defines separate classes that represent the
          different types of objects (Circle, Text, Rectangle) developers can
          append to a drawing. A possible alternative implementation might
          simply expose the underlying JavaScript API as a native GWT widget
          — in the end, you can write an API that suits your needs.

          We chose this implementation because it allows us to implement
          scalable drawings through inheritance, resulting in custom classes that
          can be instantiated and appended to any Panel. For example, to
          create a fullscreen drawing that contains a single, centered circle
          of radius 20:

          public class MyDrawing extends Raphael {
            public MyDrawing(int width, int height) {
              super(width, height);
              Circle c = new Circle(width/2, height/2, 20);
              // Raphael automatically appends the Circle to this drawing
            }
          }

          public class MyApp implements EntryPoint {
            public void onModuleLoad() {
              MyDrawing d = new MyDrawing(Window.getClientWidth(),
                                          Window.getClientHeight());
              RootPanel.get().add(d);
            }
          }
          This is a trivial example, but it clearly demonstrates the simplicity
          that can be achieved by integrating an external JavaScript library
as a Widget. To the consumers of this library, there is no difference
between using it (a third-party JavaScript library) and using the native
          GWT widget library. That's a powerful statement, and a testament
          to the GWT design team.

          Communicating with the server

          With the UI visually complete, it was time to move on to integration
          with the backend. Our existing backend was implemented in PHP, and
          we did not want to rewrite it in Java just to support GWT-RPC.
          Without native support for GWT-RPC calls to a PHP backend, we needed
          a flexible and efficient communication framework to accomplish the
          equivalent task. We chose JSON for its simplicity and solid support
          in both GWT and PHP, but ultimately we learned that regardless of
          the efficiency of the encoding, it's still easy to make design
          mistakes that will lead to inefficient communications.

First, we need to get the client and server talking to each other.
GWT's built-in HTTP and JSON classes make issuing JSON-encoded client
requests trivial, and handling JSON on the server side in PHP is easily
accomplished using Zend Framework, which provides the Zend_Json class
for encoding/decoding JSON along with components for handling and
routing requests. This provides the framework we need to tie into our
PHP code running on the server.

          Having addressed the encoding and handling of requests, our
          attention turned to dealing with the size and frequency of requests.
          The whole point of making AJAX requests was to transfer little bits
          of data instead of re-sending an entire HTML document, but we found
          a break-even point where the payload became so small that the request
          frequency was the limiting factor. At one point, we had gotten so
          carried away breaking up information into atomic pieces that the
          overhead of each request was exceeding 90% of the total time to
          complete the request, leaving less than 10% of the time to actually
          process and transfer the payload.

          The reason this can happen is pretty straightforward. Over any
network connection, there is a minimum amount of time necessary for
the HTTP request/response cycle to complete. This is compounded
          by the two-connection browser limit plus overhead imposed by the
          network stack and connection latency. The net effect of all this
          is that if you dispatch one hundred requests in rapid succession,
          each of which contains only a few hundred bytes of payload data,
          it would take nearly one hundred times as long to complete all one
          hundred requests as it would take to transfer the entire payload
          (tens of kilobytes) in one request. In terms of real-world figures,
          this translates to a 300 or 400 millisecond time for completion,
          as opposed to 30 or 40 seconds!
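A quick back-of-the-envelope calculation makes the point. The 350 ms figure below is an assumed midpoint of the 300-400 ms per-request overhead mentioned above, and the model ignores payload transfer time entirely:

```java
public class RequestOverhead {

    // Estimated wall-clock time (ms) to complete `requests` small HTTP
    // requests issued in rapid succession, each paying a fixed overhead
    // (connection latency, HTTP lifecycle, network stack).
    static double totalMs(int requests, double overheadMs) {
        return requests * overheadMs;
    }

    public static void main(String[] args) {
        double overheadMs = 350; // assumed per-request overhead

        // One hundred tiny requests pay the overhead one hundred times...
        System.out.println("100 tiny requests: "
            + totalMs(100, overheadMs) / 1000 + " s");   // 35.0 s

        // ...while one batched request pays it exactly once.
        System.out.println("1 batched request: "
            + totalMs(1, overheadMs) + " ms");           // 350.0 ms
    }
}
```

The browser's two-connection limit softens this somewhat by running a couple of requests in parallel, but the hundred-fold overhead still dominates.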

          This lesson reminds us that when you write software, you must
          always design for the limitations of the platform, even when you
          have great tools at your disposal. In this case, to write efficient
          web software with GWT, you need to make every HTTP request reach
          its fullest potential by maximizing its effects.

          We achieved this by modeling each request as a collection of
          atomic commands. In this simple model, each command can return a
          set of data, and is associated with one or more response data
          handlers. The commands are processed on the server within a database
          transaction so that the entire request either succeeds or fails as
a single unit without leaving the database in an inconsistent state.

          This abstraction allows us to queue up many physical database
          operations into one logical JSON request, not only improving
          performance but also allowing us to handle the results in a modular
          way that involves less code than a typical GWT request callback.
          In this sense we have actually managed to reduce the complexity of
          the asynchronous HTTP request for this specialized case. For
          example, take the code that handles selection of user roles from
          the Hydro4GE database:

public class SelectRoleHandler implements DatabaseRequest.Handler {
  public void handle(DatabaseRequest.Result result) {
    for (int row = 0; row < result.getRowCount(); row++) {
      JSONObject data_row = result.getRowValues(row);
      Role r = new Role(data_row);
      // process role...
    }
  }
}

          Compare this to the code that you would typically write for an
          HTTP request callback:

public class SelectRoleCallback implements RequestCallback {
  public void onError(Request request, Throwable exception) {
    Window.alert("Couldn't retrieve JSON");
  }

  public void onResponseReceived(Request request, Response response) {
    if (200 == response.getStatusCode()) {
      JSONArray data_set =
          JSONParser.parse(response.getText()).isArray();
      if (data_set != null) {
        for (int row = 0; row < data_set.size(); row++) {
          JSONObject data_row = data_set.get(row).isObject();
          Role r = new Role(data_row);
          // process role...
        }
      }
    } else {
      Window.alert("Couldn't retrieve JSON ("
          + response.getStatusText() + ")");
    }
  }
}

Besides being easier to understand (and half as many lines!),
the Handler code has an advantage over the Callback: multiple
independent handlers can be attached to a single request,
achieving separation of responsibility in handling each response.
          In addition, each handler is isolated from the results of other
          commands in the same request, so that there is no confusion over
          which results you are handling. This example does not even introduce
          the complexity of handling multiple command responses in the typical
          RequestCallback code, but the Handler code supports it implicitly
          by its nature.

          These concepts can and should be applied to GWT-RPC or any other
          communications encoding. The implementation details are different,
          but the spirit is the same: make every HTTP request really count.
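As a rough illustration of the command/handler model described above (all class and method names here are invented for the sketch, not Hydro4GE's actual API), the client queues commands with their handlers, sends them as one logical request, and dispatches each result only to its own handler:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: many atomic commands batched into one request,
// each paired with the handler that will receive its result.
public class BatchedRequest {

    public interface Handler {
        void handle(String result);
    }

    private final List<String> commands = new ArrayList<>();
    private final List<Handler> handlers = new ArrayList<>();

    // Queue a command together with its response handler.
    public void add(String command, Handler handler) {
        commands.add(command);
        handlers.add(handler);
    }

    // In the real system this would be one JSON-encoded HTTP request,
    // executed on the server inside a single database transaction; here
    // the server round-trip is simulated in-process.
    public void send() {
        List<String> results = new ArrayList<>();
        for (String command : commands) {
            results.add("result-of-" + command); // simulated server work
        }
        // Each handler sees only the result of its own command.
        for (int i = 0; i < handlers.size(); i++) {
            handlers.get(i).handle(results.get(i));
        }
    }
}
```

Because each handler is bound to exactly one command, adding a new query to a request means adding one command/handler pair, with no change to the dispatch code.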


          The techniques I have demonstrated here can be used
          equally effectively in both new and existing projects. Whether
          you are trying to build a slick UI, wrap an external library, or
          talk to a server, I have only scratched the surface of what is
possible with GWT. I am especially looking forward to GWT 2.0 for
continued improvements on some of the topics covered here, made
possible by exciting upcoming features such as UiBinder.

          In short, I am pretty excited about GWT and Hydro4GE, and I hope
          you are too. Be sure to check out the sneak peek of Hydro4GE to see a
          demo of our GWT UI in action!

Geoffrey C. Speicher, MS, is a Software Engineer at Software Engineering
and the Chief System Architect of Hydro4GE.


          [Gd] The 7th Plague and Beyond


          Google Testing Blog: The 7th Plague and Beyond

          By James Whittaker

          Sorry I haven't followed up on this, let the excuse parade begin: A) My new book just came out and I have spent a lot of time corresponding with readers. B) I have taken on leadership of some new projects including the testing of Chrome and Chrome OS (yes you will hear more about these projects right here in the future). C) I've gotten just short of 100 emails suggesting the 7th plague and that takes time to sort through.

          This is clearly one plague-ridden industry (and, no, I am not talking about my book!)

I've thrown out many suggestions that deal with a specific organization or person who just doesn't take testing seriously enough. Things like the Plague of Apathy (suggested exactly 17 times!) just don't fit. This isn't an industry plague; it's a personal/group plague. If you don't care about quality, please do us all a favor and get out of the software business. Go screw up someone else's industry; we have enough organic problems to deal with. I also didn't put down the Plague of the Deluded Developer (suggested under various names 22 times) because it deals with developers that, as a Googler, I no longer have to deal with ... those who think they never write bugs. Our developers know better, and if I find out exactly where they purchased that clue I will forward the link.

          Here's some of the best. As many of them have multiple suggesters I have credited the persons who were either first or gave the most thoughtful analysis. Feel free, if you are one of these people, to give further details or clarifications in the comments of this post as I am sure these summaries do not do them justice.

          The Plague of Metrics (Nicole Klein, Curtis Pettit plus 18 others): Metrics change behavior and once a tester knows how the measurement works, they test to make themselves look good or say what they want it to say ignoring other more important factors. The metric becomes the goal instead of measuring progress. The distaste for metrics in many of these emails was palpable!

          The Plague of Semantics (Chris LeMesurier plus 3 others): We misuse and overuse terms and people like to assign their own meaning to certain terms. It means that designs and specs are often misunderstood or misinterpreted. This was also called the plague of assumptions by other contributors.

          The Plague of Infinity (Jarod Salamond, Radislav Vasilev and 14 others): The testing problem is so huge it's overwhelming. We spend so much time trying to justify our coverage and explain what we are and are not testing that it takes away from our focus on testing. Every time we take a look at the testing problem we see new risks and new things that need our attention. It randomizes us and stalls our progress. This was also called the plague of endlessness and exhaustion.

          The Plague of Miscommunication (Scott White and 2 others): The language of creation (development) and the language of destruction (testing) are different. Testers write a bug report and the devs don't understand it and cycles have to be spent explaining and reexplaining. A related plague is the lack of communication that causes testers to redo work and tread over the same paths as unit tests, integration tests and even the tests that other testers on the team are performing. This was also called the plague of language (meaning lack of a common one).

          The Plague of Rigidness (Roussi Roussev, Steven Woody, Michele Smith and 5 others): Sticking to the plan/process/procedure no matter what. Test strategy cannot be bottled in such a manner yet process heavy teams often ignore creativity for the sake of process. We stick with the same stale testing ideas product after product, release after release. This was also called the plague of complacency. Roussi suggested a novel twist calling this the success plague where complacency is brought about through success of the product. How can we be wrong when our software was so successful in the market?

          And I have my own 7th Plague that I'll save for the next post. Unless anyone would like to write it for me? It's called the Plague of Entropy. A free book to the person who nails it.


          [Gd] Heavy Duty: What Project Hosting Users are Doing


          Google Code Blog: Heavy Duty: What Project Hosting Users are Doing

          In July, the Project Hosting team announced the People sub-tab where project members can easily document their duties within their projects.

          Here are the top ten most frequently selected project duties:
          1. Lead by providing a project vision and roadmap
          2. Design new features, write code and unit tests
          3. Design core libraries, write code and unit tests
          4. Have fun hacking and learn new stuff!
          5. Test the system before each release
          6. Review code changes and provide constructive feedback
          7. Plan the scope of release milestones and track progress
          8. Lead the UI design and incorporate feedback
          9. Write end-user documentation and examples
          10. Triage new issues and support requests from end-users

          Those frequent duties are a testament to the serious and thoughtful software development processes often found in open source development. But, open source is not all hard work: our users also decided that it was important to document some of their more colorful duties.  Those ranged from general, "Be awesome," to vicarious, "Watch nervously as students write code," to self-effacing, "Create elaborate unit tests for small corners of the library, write hilariously malformed XML comments, and mercilessly break the build," to simply practical leadership, "Buy the pizza for everyone else."

          Don't skip your duty to write your own! Just click the People sub-tab and start to document what you and your project team are supposed to be doing.

          By Jason Robbins, Google Code Team

          [Gd] Check out Cyworld's developer sandbox


          OpenSocial API Blog: Check out Cyworld's developer sandbox

          Annyonghaseyo!  This is Kyle Kim from Cyworld with some exciting news!

We have recently launched Dev.Square, a sandbox environment for Cyworld developers.  In celebration of the launch, we had a big announcement conference in Seoul with over 750 people attending (see the photos here).  I'd like to thank everyone who attended the conference as well as those who contacted us after reading our previous post.

          For developers in Korea who are interested in writing applications for Cyworld, we are hosting a workshop on September 4, 2009. Speakers from our partner companies (including Mickey Kim and Chris Schalk from Google) will host sessions on OpenSocial APIs, social gaming, and trends within the social apps industry.  For more details on the workshop, please visit the Dev.Square website.

The Dev.Square website is currently available in Korean and English; language settings can be changed from the drop-down menu at the bottom of the page.  To register as a developer, you must first register as a Nate/Cyworld member.  We apologize for the cumbersome process, but do not miss this opportunity to access our 24 million Cyworld users / 27 million NateOn (IM) users.

For English comments / questions, please contact Dyne, and for Korean comments / questions, contact

          We have also created an official Twitter page. Please follow us for frequent updates.

          Posted by Kyle Kim, Cyworld Team

[Gd] It is not about writing tests, it's about writing stories


Google Testing Blog: It is not about writing tests, it's about writing stories

I would like to make an analogy between building software and building a car. I know it is an imperfect one, as one is about design and the other is about manufacturing, but indulge me; the lessons are very similar.

A piece of software is like a car. Let's say you would like to test a car that you are in the process of designing. Would you test it by driving it around and making modifications to it, or would you prove your design by testing each component separately? I think that testing all of the corner cases by driving the car around is very difficult. Yes, if the car drives, you know that a lot of things must work (engine, transmission, electronics, etc.), but if it does not work, you have no idea where to look. There are also some things you will have a very hard time reproducing in an end-to-end test. For example, it will be very hard to see whether the car will start in the extreme cold of the North Pole, or whether the engine will overheat going full throttle up a sand dune in the Sahara. I propose we take the engine out and simulate the load on it in a laboratory.

We call driving the car around an end-to-end test, and testing the engine in isolation a unit test. With unit tests it is much easier to simulate failures and corner cases in a controlled environment. We need both kinds of tests, but I feel that most developers can only imagine the end-to-end tests.

But let's see how we could use tests to design a transmission. First, a little terminology change: let's not call them tests, but instead call them stories. They are stories because that is what they tell you about your design. My first story is:

• the transmission should allow the output shaft to be locked, to move in the same direction as the input shaft (D), to move in the opposite direction (R), or to move independently (N)

Given such a story, I could easily create a test which would prove that the above story is true for any design submitted to me. What I would most likely get is a transmission with only a single gear in each direction. So let's write another story:

          • the transmission should allow the ratio between input and output shaft to be [-1, 0, 1, 2, 3, 4]

Again, I can write a test for such a transmission, but I have not specified how the forward gear should be chosen, so the transmission would most likely be permanently stuck in 1st gear, limiting my speed and over-revving the engine.

• the transmission should start in 1st gear and then switch to a higher gear before the engine reaches maximum revolutions.

This is better, but my transmission would most likely rev the engine to its maximum before switching, and once in a higher gear it would not down-shift when I slowed down.

• the transmission should down-shift whenever the engine RPM falls below 1000 RPM

OK, now it is starting to drive like a car, but the shifting limits are still effectively 1000-6000 RPM, which is not a very fuel-efficient way to drive your car.

• the transmission should up-shift whenever the estimated fuel consumption at a higher gear ratio is better than at the current one.

So now our engine will no longer over-rev, but it will be a lazy car, since once the transmission is in fuel-efficient mode it will not want to down-shift.

          • the transmission should down-shift whenever the gas pedal is depressed more than 50% and the RPM is lower than the engine's peak output RPM.

          I am not a transmission designer, but I think this is a decent start.
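To make this concrete, here is a minimal sketch of how one of the stories above could be written as an executable test. The Transmission interface and the toy shifting logic are invented for illustration; the point is that the story constrains the observable behavior, not the internals:

```java
public class TransmissionStory {

    // Any candidate design only has to satisfy the stories.
    public interface Transmission {
        int gear();
        void update(int engineRpm); // react to the current engine speed
    }

    // One possible design; its internals are free to change.
    public static class SimpleTransmission implements Transmission {
        private int gear = 1;
        public int gear() { return gear; }
        public void update(int engineRpm) {
            if (engineRpm < 1000 && gear > 1) gear--;  // down-shift story
            else if (engineRpm > 6000) gear++;         // don't over-rev story
        }
    }

    // Story: "the transmission should down-shift whenever the engine
    // RPM falls below 1000 RPM."
    public static boolean downShiftStoryHolds(Transmission t) {
        t.update(6500);   // rev past the limit to force an up-shift first
        int before = t.gear();
        t.update(900);    // engine falls below 1000 RPM
        return t.gear() == before - 1;
    }

    public static void main(String[] args) {
        System.out.println(downShiftStoryHolds(new SimpleTransmission())); // true
    }
}
```

A redesigned transmission simply replaces SimpleTransmission, and the story-test stays exactly as it is.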

Notice how I focused on the end result of the transmission rather than on testing its specific internals. The transmission designer would have a lot of leeway in choosing how it worked internally. Once we had something and tested it in the real world, we could augment this list of stories with additional stories as we discovered additional properties we would like the transmission to possess.

If we decided to change the internal design of the transmission for whatever reason, we would have these stories as guides to make sure that we did not forget about anything. The stories represent assumptions which need to be true at all times. Over the lifetime of the component we can collect hundreds of stories, representing an equal number of assumptions built into the system.

Now imagine that a new designer comes on board and makes a design change which he believes will improve the responsiveness of the transmission. He can do so because the existing stories do not restrict how the transmission works, only what the outcome should be. The stories save the designer from breaking an existing assumption which was already designed into the transmission.

Now let's contrast this with how we would test the transmission if it were already built.

          • test to make sure all of the gears work

          • test to make sure that the engine is not allowed to over-rev

It is harder now to think about what other tests to write, since we are not using the tests to drive the design. Now let's say that someone insists we get 100% coverage. We open the transmission up and see all kinds of logic and rules, and we don't know why they are there, since we were not part of the design, so we write a test:

          • at 3000 RPM input shaft, apply 100% throttle and assert that the transmission goes to 2nd gear.

Tests like that are not very useful when you want to change the design: you are likely to break them, and without fully understanding why the test was testing those specific conditions, it is hard to know whether anything is actually broken when the test is red. That is because the test does not tell a story any more; it only asserts the current design. It is likely that such a test will be in the way when you try to make design changes. The point I am trying to make is that there is a huge difference between writing tests before or after. When we write tests before:

• we create stories which force particular design decisions.

• the tests become a collection of assumptions which need to be true at all times.

When we write tests after the fact:

• we miss a lot of the reasons why things are done in a particular way, even if we have 100% coverage

• the tests are often brittle, because they are tied to the particulars of the current implementation

• the tests are just snapshots; they don't tell a story of why the component does something, only that it does.

For this reason there is a huge difference in quality between writing assumptions as stories beforehand (which force the design to emerge) and writing tests afterward, which merely take a snapshot of a given design.

          Tuesday, September 1, 2009

          [Gd] Tips for News Search


          Official Google Webmaster Central Blog: Tips for News Search

          Webmaster Level: All

          During my stint on the "How Google Works Tour: Seattle", I heard plenty of questions regarding News Search from esteemed members of the press, such as The Stranger, The Seattle Times and Seattle Weekly. After careful note-taking throughout our conversations, the News team and I compiled this presentation to provide background and FAQs for all publishers interested in Google News Search.

          Along with the FAQs about News Sitemaps and PageRank in the video above, here's additional Q&A to get you started:

          Would adding a city name to my paper—for example, changing our name from "The Times" to "The San Francisco Bay Area Times"—help me target my local audience in News Search?
          No, this won't help News rankings. We extract geography and location information from the article itself (see video). Changing your name to include relevant keywords or adding a local address in your footer won't help you target a specific audience in our News rankings.
          What happens if I accidentally include URLs in my News Sitemap that are older than 72 hours?
          We want only the most recently added URLs in your News Sitemap, as it directs Googlebot to your breaking information. If you include older URLs, no worries (there's no penalty unless you're perceived as maliciously spamming -- this case would be rare, so again, no worries); we just won't include those URLs in our next News crawl.
          To get the full scoop, check out the video!

          Written by Maile Ohye, Developer Programs Tech Lead
          Filmed by Michael Wyszomierski, Search Quality Team

          Monday, August 31, 2009

          [Gd] Dev Channel Update: Updates for Snow Leopard


          Google Chrome Releases: Dev Channel Update: Updates for Snow Leopard

The dev channel has been updated with a new release for Mac OS X.  This release includes bug fixes and improved compatibility with Snow Leopard.  It is only slightly different from the previous release, so please refer to that release's notes for more information.


          Changes specific to Mac OS X:

          More details about additional changes are available in the svn log of all revisions.

          You can find out about getting on the Dev channel here:

          If you find new issues, please let us know by filing a bug at

          Jonathan Conradt
          Engineering Program Manager