Saturday, September 4, 2010

[Gd] An overview of the Chrome Web Store Licensing API


Chromium Blog: An overview of the Chrome Web Store Licensing API

We recently released a developer preview of the Chrome Web Store, which included new documentation about our upcoming payments and licensing API. With this blog post, we wanted to share a quick overview and some tips about this API so that you can start developing your apps with it.

The Chrome Web Store will offer a built-in payments system that allows you to charge for apps, making it easy for users to pay without leaving the store. If you want to work with this payments system in your apps, you can use the Chrome Web Store Licensing API to verify whether a given user has paid and should have access to your app. Here’s how the API works:

The Licensing API has two inputs: the app ID and the user ID. The app ID is a unique identifier that’s assigned to each item uploaded to the store. You can see it most easily in the URL of your detail page—for example, .../detail/aihcahmgecmbnbcchbopgniflfhgnkff.

The user ID is the OpenID URL corresponding to the user’s Google Account. You can get the OpenID URL for the current user either by using Google App Engine’s built-in OpenID support or by using a standard OpenID library and Google’s OpenID endpoint.

Given the app ID and the user ID, you make Licensing API requests using a URI of the form .../<appID>/<userID>
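As a sketch, the request URI can be assembled from the two IDs; the base endpoint below is a placeholder (the real one is given in the Licensing API docs), and the OpenID URL must be percent-encoded since it is itself a URL:

```python
from urllib.parse import quote

# Placeholder base -- substitute the endpoint from the Licensing API docs.
LICENSE_API_BASE = "https://example.googleapis.com/chromewebstore/licenses"

def license_request_uri(app_id: str, openid_url: str) -> str:
    """Build the Licensing API request URI <base>/<appID>/<userID>,
    percent-encoding the user's OpenID URL."""
    return "%s/%s/%s" % (LICENSE_API_BASE, app_id, quote(openid_url, safe=""))
```

The resulting request would then be signed with your app's OAuth token, using a standard OAuth library.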

When your app makes an HTTP request to the Licensing API, the request must be authenticated. Authentication works by matching the Google Account that uploaded the app against the Google Account used to call the API.

There are a few ways the app can indicate the Google Account used to make the API call. For the Chrome Web Store Licensing API, we highly recommend the use of OAuth for Web Applications. In this approach, OAuth access tokens are used to identify the Google Account calling the API.

You can obtain the necessary token via the Chrome Developer Dashboard by clicking the “AuthToken” link for your app. (This link appears only if your app uses Chrome Web Store Payments.) You’ll need this OAuth token to sign the HTTP requests to call the Licensing API. The best way to sign your requests is with a standard OAuth library.

The OAuth tokens that the Chrome Developer Dashboard provides are limited in scope, which means that they can only be used to make Licensing API calls. They can’t be used to make calls to other authenticated Google APIs or for anything else.

Once you’re ready to make authenticated calls, give the API a try by making your first request. For more information read the Licensing API docs, try out the Getting Started tutorial, check out the samples, and watch the video below:

Note that the current version of the Licensing API is a stub, which means that it doesn’t return live data based on purchases just yet. Instead, it returns dummy responses that you can use to verify the various scenarios of your implementation. However, the protocol, response format, and URL endpoints of the API are all final, so your implementation shouldn’t need to change before the final launch of the store.

We look forward to receiving your feedback on the current Licensing API implementation at our developer discussion group.

Posted by Munjal Doshi, Software Engineer

[Gd] New ways to view Webmaster Tools messages


Official Google Webmaster Central Blog: New ways to view Webmaster Tools messages

Webmaster Level: All

Now there’s a new way to see just the messages for a specific site. A new Messages feature will appear on all site pages. The feature is just like the Message Center on the home page, except it’ll show only messages for the currently selected site. This gives you more freedom to choose how you want to view your messages: either for all your sites or for just one site at a time.

Alerts (formerly known as SiteNotice messages) will now be more prominent in the Message Center. These messages tell you about significant changes we’ve noticed related to your site that may indicate serious problems. For instance, alerts may warn you about an increase in crawl errors, an increase in 404 errors, or possible outages. With their newfound prominence comes a new name: what used to be “SiteNotice messages” will now simply be known as “alerts.”

Messages containing alerts will be marked with an icon to make them quickly distinguishable from other messages. Each site’s Dashboard will display a notification whenever the site has unread alerts. The Dashboard notification will lead to the new site Message Center with a filter enabled to show only alerts for the current site.

You can also enable the alerts filter yourself. On the home page, enabling the alerts filter across all your sites is a great way to see alerts you may have missed and may help you find problems common across multiple sites. Even with these changes we recommend you use the email forwarding feature to receive these important alerts without having to visit Webmaster Tools.

We hope these new features make it easier to manage your messages. If you have any questions, please post them in our Webmaster Help Forum or leave your comments below.

Written by Steve Geluso, Software Engineering Intern

Friday, September 3, 2010

[Gd] New Sidewiki “Sidebar” web element


Google Code Blog: New Sidewiki “Sidebar” web element

We are very pleased to announce a new Sidewiki “sidebar” web element. Google Sidewiki allows visitors to your website to contribute helpful information and read other visitors’ insights alongside the pages of the website. The new web element is a Sidewiki button, which, when clicked, displays a fully functional Sidewiki sidebar to the left of the page content. This means that your visitors can see the Sidewiki content for your page even if they don’t have Google Toolbar or the Sidewiki Chrome extension installed.

You can choose from several different looks and feels created by Google, or even create a new custom one. Use our wizard to choose the desired look and behavior, embed the generated code in your page, and you’re done. Here's a sketch of what it looks like when a visitor is viewing the Sidewiki content.

Go to the wizard to get started. If you'll be using the element on your site, we’d love to hear about it via @googlesidewiki on Twitter.

By Roman Shuvaev, Sidewiki Team

[Gd] Deep dive articles for the Analytics Data Export API


Google Code Blog: Deep dive articles for the Analytics Data Export API

(Cross-posted from Google Analytics Blog)

On the Google Analytics API Team, we’re fascinated with what people create using the Data Export API. You guys come up with some really amazing stuff! Lately, we’ve also been paying a lot of attention to how people use it. We looked at whether the API has stumbling points (and where they are), what common features every developer wants in their GA applications, and what tricky areas need deeper explanations than we can give by replying to posts in our discussion group.

As a result of identifying these areas, we’ve written a few in-depth articles. Each article is meant as a “Deep Dive” into a specific topic, and is paired with open-source, sample reference code.

In no particular order, the articles are as follows:

Visualizing Google Analytics Data with Google Chart Tools
This article shows you how to use JavaScript with the Data Export API and Google Chart Tools to pull your Google Analytics data and dynamically create and embed chart images in a web page.

Outputting Data from the Data Export API to CSV Format
If you use Google Analytics, chances are that your data eventually makes its way into a spreadsheet. This article shows you how to automate all the manual work by printing data from the Data Export API in CSV, the most ubiquitous file format for table data.
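As a quick illustration of the idea (not code from the article), Python's standard csv module handles quoting and delimiters for you; the row dictionaries and column names here are made up for the example:

```python
import csv
import io

def to_csv(rows, columns):
    """Serialize API result rows (dicts keyed by dimension/metric name)
    to CSV text, writing a header row first."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(columns)
    for row in rows:
        writer.writerow([row.get(col, "") for col in columns])
    return buf.getvalue()
```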

Filling in Missing Values In Date Requests
If you request data as a time series, you may find that dates are missing from the results: when requesting multiple dimensions, the Data Export API only returns entries for dates that have collected data. This article describes how to fill in those missing dates.
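The gap-filling itself is simple once you know the requested date range; a minimal sketch, independent of the article's sample code:

```python
from datetime import date, timedelta

def fill_missing_dates(series, start, end, default=0):
    """Expand a sparse {date: value} mapping into a dense day-by-day
    series, inserting `default` for dates the API returned no row for."""
    filled = {}
    day = start
    while day <= end:
        filled[day] = series.get(day, default)
        day += timedelta(days=1)
    return filled
```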

We think this article format makes for a perfect jumping off point. Download the code, follow along in the article, and when you’re done absorbing the material, treat the code as a starting point and hack away to see what you can come up with!

And if you’ve got some more ideas for areas you’d like us to expound upon, let us know!

By Alexander Lucas, Google Analytics API Team

[Gd] An Ingredients List for Testing - Part Three


Google Testing Blog: An Ingredients List for Testing - Part Three

By James Whittaker

Possessing a bill of materials means that we understand the overall size of the testing problem. Unfortunately, the size of most testing problems far outstrips any reasonable level of effort to solve them. And not all of the testing surface is equally important. There are certain features that simply require more testing than others. Some prioritization must take place. What components must get tested? What features simply cannot fail? What features make up the user scenarios that simply must work?

In our experience it is the unfortunate case that no one really agrees on the answers to these questions. Talk to product planners and you may get a different assessment than if you talk to developers, sales people or executive visionaries. Even users may differ among themselves. It falls on testers to act as the user advocates and find out how to take into account all these concerns to prioritize how testing resources will be distributed across the entire testing surface.

The term commonly used for this practice is risk analysis, and at Google we take information from all the project's stakeholders to come up with overall numerical risk scores for each feature. How do we get all the stakeholders involved? That's actually the easy part. All you need to do is assign numbers and then step back and have everyone tell you how wrong you are. We've found being visibly wrong is the best way to get people involved in the hopes they can influence getting the numbers right! Right now we are collecting this information in spreadsheets. By the time GTAC rolls around, the tool we are using for this should be in a demonstrable form.
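As a toy sketch of what a numerical scheme like this might look like (the weighting here is invented for illustration, not Google's actual formula), each stakeholder scores every feature for failure frequency and impact, and the risk score is the product of the averages:

```python
def risk_scores(votes_by_feature):
    """votes_by_feature: {feature: [(frequency, impact), ...]}, one
    (1-10, 1-10) tuple per stakeholder. Returns {feature: score}."""
    scores = {}
    for feature, votes in votes_by_feature.items():
        avg_freq = sum(f for f, _ in votes) / len(votes)
        avg_impact = sum(i for _, i in votes) / len(votes)
        scores[feature] = avg_freq * avg_impact
    return scores

# Rank features so the riskiest get testing resources first.
ranked = sorted(risk_scores({
    "checkout": [(6, 9), (8, 9)],
    "help page": [(2, 2), (4, 2)],
}).items(), key=lambda kv: -kv[1])
```

Publishing a ranking like this is exactly the kind of visible, arguable number that draws stakeholders into the conversation.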

[Gd] First impressions matter: Ideas to improve your app’s ‘out of the box experience’


Google Apps Developer Blog: First impressions matter: Ideas to improve your app’s ‘out of the box experience’

This is the first in a series of posts looking at the varied integration options with Google Apps and the Google Apps Marketplace. With that in mind, what better place to start off than users’ first experiences with your application -- the initial setup and login process.

The more work an application requires just to get started, the more opportunities there are for administrators and users to abandon the application and try something else, or give way to complacency. Yet simply thrusting users into an app with no help getting started can be equally frustrating. Finding the right balance can be tricky, but giving users the right choices at the right times can make all the difference in converting them into productive and happy customers.

Get critical information up front

One of the first opportunities to engage users is when they’re installing your application. With a simple additional <link> in your manifest, you can easily alter the installation flow and bring the administrator to your site while you have their attention. This is the perfect time to gather whatever information your application requires before basic functionality can be enabled, particularly if that information is best handled by an administrator or business owner.

Let’s take a look at how we would build this out with our SaasyVoice demonstration application we wrote for our talk at Google I/O. Here’s a quick mockup of what an administrator would see when they click through to setup the application.

There are a few important things to point out here.
  1. We want to keep administrators focused and guide them through the setup so they complete it the first time through; it helps to set expectations up front.
  2. We’d like to make sure our existing customers can take advantage of the integration with Google Apps and make it clear how they should proceed (we’ll leave the details behind that link for another post!)
  3. We still need to ask for some information about the company, but we want to be careful not to ask for things we can otherwise discover for ourselves. To get information like the administrator's name and email address, we simply authenticate them with OpenID prior to displaying the page and ask for the attributes we need.

Make provisioning users easy

During this step of the install, it's helpful to guide the administrator through properly configuring the app for the end users. For some applications, this could mean assigning the appropriate roles to managers and employees or, in the case of SaasyVoice, assigning each user a phone extension. To accomplish this, Marketplace apps can take advantage of the Provisioning API to discover the users and groups for a domain and learn which users are privileged administrators.

Help admins spread the word

Administrators need to do more than just install the application on their domain. Notifying users and giving them the knowledge needed to use the application effectively can be a challenge. Using the same data from the Provisioning API, we can help admins get their users up to speed by sending instructions to each user. Of course, we want to be good email citizens too and not email users without the administrator's consent.

Remember that this is all happening within the context of the Marketplace's installation process, so it's important to return the admin to Google Apps to complete the process and enable the application for their domain. Give administrators too many choices or too much freedom and there's a good chance they'll fall out of that process and leave the application inaccessible to users.

Don’t forget the users!

We've covered the administrator's side of things, helping them set up the application quickly by taking advantage of the integration options and data available to us. The last piece of the puzzle is making sure users have an equally positive experience.

Applications that create accounts on demand can use many of the same techniques to minimize setup time for each user. We can also take advantage of a user’s first log in to display important messages and tips to help them get started.

While the examples here are just mockups, many apps in the Marketplace have already adopted these and other techniques. Our integration guide has a few examples, or you can try out some apps for yourself in the Marketplace. More importantly, developers have learned that investing the time and resources to craft a great first experience for their apps pays off.

Stay tuned for the next in this series on Google Apps integration best practices!

Posted by Steven Bazyl, Google Apps Marketplace Team

Want to weigh in on this topic? Discuss on Buzz


[Gd] Wave open source next steps: "Wave in a Box"


Google Wave Developer Blog: Wave open source next steps: "Wave in a Box"

Since the announcement that we will discontinue development of Google Wave as a standalone product, many people have asked us about the future of the open source code and Wave federation protocol. After spending some time on figuring out our next steps, we'd like to share the plan for our contributions over the coming months.

We will expand upon the 200K lines of code we've already open sourced to flesh out the existing example Wave server and web client into a more complete application, or "Wave in a Box."

This project will include:

  • an application bundle including a server and web client supporting real-time collaboration using the same structured conversations as the Google Wave system
  • a fast and fully-featured wave panel in the web client with complete support for threaded conversations
  • a persistent wave store and search implementation for the server (building on contributed patches to implement a MongoDB store)
  • refinements to the client-server protocols
  • gadget, robot and data API support
  • support for importing wave data from Google Wave
  • the ability to federate across other Wave in a Box instances, with some additional configuration
This project will not have the full functionality of Google Wave as you know it today. However, we intend to give developers and enterprising users an opportunity to run wave servers and host waves on their own hardware.

Since the beginning, it has been our vision that the Google Wave protocols could support a new generation of communication and collaboration tools. The response from the developer community to date has been amazing and rewarding. Even more so now, we believe that developers and other projects are a critical part of this story.

While Wave in a Box will be a functional application, the future of Wave will be defined by your contributions. We hope this project will help the Wave developer community continue to grow and evolve. We'll discuss more technical details of our plan on the Wave Protocol Forum, which is the best place to keep up with the latest progress on the open source project and learn how you can contribute.

Wave on

Posted by Alex North, Software Engineer, Google Wave team


Thursday, September 2, 2010

[Gd] Rich snippets: testing tool improvements, breadcrumbs, and events


Official Google Webmaster Central Blog: Rich snippets: testing tool improvements, breadcrumbs, and events

Webmaster Level: All

Since the initial roll-out of rich snippets in 2009, webmasters have shown a great deal of interest in adding markup to their web pages to improve their listings in search results. When webmasters add markup using microdata, microformats, or RDFa, Google is able to understand the content on web pages and show search result snippets that better convey the information on the page. Thanks to steady adoption by webmasters, we now see more than twice as many searches with rich snippets in the results in the US, and a four-fold increase globally, compared to one year ago. Here are three recent product updates.

Testing tool improvements

Despite the healthy adoption rate by webmasters so far, implementing the rich snippets markup correctly can still be a major challenge. To help address this, we’ve added new error messages to the rich snippets testing tool to help you better identify and fix any problems with the markup.

If you’ve added markup in the past but haven’t seen rich snippets appear for your site, we encourage you to take a few minutes to try testing the markup again on the updated testing tool.

Rich snippets markup for breadcrumbs

Last year, Google announced a modification to search results to begin showing site hierarchies (typically referred to as "breadcrumbs") rather than standard URLs in cases where it helped users to better understand a website:

We are now adding support for a Breadcrumbs markup format that allows webmasters to explicitly identify the breadcrumb hierarchy on their pages.
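For illustration, a breadcrumb trail marked up with the data-vocabulary.org Breadcrumb microdata type might look like the sketch below; consult the rich snippets documentation for the authoritative property names:

```html
<div itemscope itemtype="http://data-vocabulary.org/Breadcrumb">
  <a href="http://www.example.com/books" itemprop="url">
    <span itemprop="title">Books</span>
  </a> &gt;
  <a href="http://www.example.com/books/scifi" itemprop="url">
    <span itemprop="title">Science Fiction</span>
  </a>
</div>
```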

If the breadcrumbs UI is already showing for your site, we'll continue to show it even if you don't do the markup, so don't worry about any existing UI disappearing. Note that this new format is experimental. Based on feedback and on other available standards, this format may be modified or replaced in the future. As with other rich snippet types, while markup helps us to better understand the content on your site, it does not guarantee that the breadcrumbs UI will be shown for your web pages in search results.


Rich snippets for events

In January, we added support for rich snippets for events. If a web page containing event listings showed up in search results, up to three links to specific events could be shown in the search result snippet.

This works well for general queries like [concerts in seattle], but we also wanted to improve the search experience when searching for a specific event. We will now show rich snippets when pages containing a single event show up in search results. Single event rich snippets now contain the date and location of the event:

For instructions on adding events markup, refer to the events page in the rich snippets documentation.

Posted by Kavi Goel and Pravir Gupta, Search Quality team

[Gd] Drupal 7 - faster than ever


Google Code Blog: Drupal 7 - faster than ever

This is a guest post by Owen Barton, partner and director of engineering at CivicActions. Owen has been working with Google's “Make the Web Faster” project team and the Drupal community to make improvements in Drupal 7 front-end performance. This is a condensed version of a more in-depth post over at the CivicActions blog.

Drupal is a popular free and open source publishing platform, powering high profile sites such as The White House, The New York Observer and Amnesty International. The Drupal community has long understood the importance of good front-end performance to successful web sites, being ahead of the game in many ways. This post highlights some of the improvements developed for the upcoming Drupal 7 release, several of which can save an additional second or more of page load times.

Drupal 7 has made its caching system more easily pluggable, to allow for easier memcache integration, for example. It has also enabled HTTP caching headers to be set, so that logged-out users can cache entire pages locally, and improved compatibility with reverse proxies and content distribution networks (CDNs). There is also a patch waiting that reduces both the response size and the time taken to generate 404 responses for inlined page assets. Depending on the type of 404 (404s for CSS have a larger effect than those for images, for example), the slower 404s were adding 0.5 to 1 second to the calling page's load time.

Drupal currently has the ability to aggregate multiple CSS and JavaScript files by concatenating them into a smaller number of files to reduce the number of HTTP requests. There is a patch in the queue for Drupal 7 that could allow aggregation to be enabled by default, which is great because the large number of individual files can add anything from 0-1.5 seconds to page loads.

One issue that has become apparent with the Drupal 6 aggregation system is that users can end up downloading aggregate files that include a large amount of duplicate code. On one page the aggregate may contain files a, b and c, whilst on a second page the aggregate may contain files a, b and d, with the “c” and “d” files added conditionally on specific pages. This defeats the benefits of browser caching and slows down subsequent page loads. Benchmarking on core alone shows that avoiding duplicate aggregates can save over a second across 5 page loads. A patch has already been committed that requires files to be explicitly added to the aggregate, and fixes Drupal core to add the appropriate files unconditionally.
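One way to implement that fix is to name each aggregate after the exact set of files it contains, so every page that needs the same unconditional set reuses a single cached file; a sketch of the idea (not Drupal's actual code):

```python
import hashlib

def aggregate_filename(files, extension="css"):
    """Derive a stable aggregate name from the sorted list of member
    files: identical sets map to identical (cacheable) filenames."""
    key = hashlib.md5("\n".join(sorted(files)).encode("utf-8")).hexdigest()
    return "%s_%s.%s" % (extension, key, extension)
```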

Drupal has supported gzip compression of HTML output for a long time, however for CSS and JavaScript, the files are delivered directly by the webserver, so Drupal has less control. There are webserver based compressors such as Apache’s mod_deflate, but these are not always available. A patch is in the queue that stores compressed versions of aggregated files on write and uses rewrite and header directives in .htaccess that allow these files to be served correctly. Benchmarks show that this patch can make initial page views 20-60% faster, saving anything from 0.3 to 3 seconds total.
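The core of the patch's idea can be sketched in a few lines: compress each aggregate once when it is written, and let .htaccess rewrite and header directives serve the .gz variant; the helper below is illustrative, not the actual Drupal patch:

```python
import gzip

def precompress(payload: bytes) -> bytes:
    """Gzip an aggregated CSS/JS payload once at write time, so the
    webserver can serve the compressed variant even without mod_deflate."""
    return gzip.compress(payload, compresslevel=9)
```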

The Drupal 7 release promises some real improvements from a front-end performance point of view. Other performance optimizations will no doubt continue to appear and be refined in contributed modules and themes, as well as in site building best practices and documentation. In Drupal 8 we will hopefully see further improvements in the CSS/JS file aggregation system, increased high-level caching effectiveness, and more tools to help site builders reduce file sizes. If you have yet to try Drupal, download it now, give it a try, and tell us in the comments if your site performance improves!

By Owen Barton of CivicActions

[Gd] Google Developer Day 2010 Agenda: Android, Chrome & HTML5 and Cloud Platform


Google Code Blog: Google Developer Day 2010 Agenda: Android, Chrome & HTML5 and Cloud Platform

We are now ready to share the Google Developer Day agendas for Tokyo, Sao Paulo, Munich, Moscow and Prague. We have so much technical content to share but alas, Developer Day is a one-day event. There may still be changes to the agenda, but here is a sneak peek at where we are.

Globally, we will feature three major tracks:
  • Android - With the continued momentum and growth of the platform, we would like to continue the conversation with you at Developer Day. We will feature sessions on Android performance, mobile user experience and best practices on building apps, and we will also deep dive on a new feature, Cloud to Device Messaging (C2DM).

  • Chrome & HTML5 - We will discuss how to build an app for the Chrome Web Store and how to improve its development and performance. We’ll show which aspects of HTML5, Chrome Developer Tools and Native Client can be most useful to you. Finally, we will cover everything auth-related to show you when and where to use various authentication tools and how they integrate with our APIs and products.

  • Cloud Platform - Building off of our series of announcements at Google I/O, we will feature sessions on App Engine, App Engine for Business, Spring integration, Google Web Toolkit, Google Storage for Developers, BigQuery and Prediction API. Be prepared for code samples, how to optimize performance and a glimpse into what else is on our roadmap.
We are happy to announce that Eric Tholome, Product Management Director for Developer Products, will be a keynote speaker in Sao Paulo, Munich, Moscow and Prague. In addition, we are happy to welcome a second keynote speaker in each city:
  • Sao Paulo, Brazil - Mario Queiroz, VP Product Management

  • Munich, Germany - Dr. Wieland Holfelder, Engineering Director

  • Moscow, Russia - Dr. Gene Sokolov, Head of Moscow Engineering
Due to the success of the Venture Capital sessions at Google I/O and the growing VC activity in our global markets, a new addition this year is Venture Capital panels at most of our Developer Days. Come hear from your local VCs on what they look for in startups.

The Sao Paulo and Moscow keynote presentations will have live translation, and for sessions, check the FAQ section of your Developer Day site. We will have savvy gurus available to answer your questions during Office Hours, and you will have a chance to meet Googlers and each other over Happy Hour.

Registration will open on September 15th for Sao Paulo and on September 22nd for Munich, Moscow and Prague. Tokyo’s registration is now closed.

In the meantime, please follow us on this blog and on Twitter to keep up to date with the latest news on Google Developer Day and other development topics: @googledevjp (Japan), @googledevbr (Brazil) and @gddru (Russia).

Hashtags: #gdd2010jp, #gddbr, #gddde, #gddru, #gddcz

By Susan Taing, Google Developer Team

[Gd] SVG documents searchable on Google


Google Code Blog: SVG documents searchable on Google

Just a heads up that it should now be easier for users to find SVG files when searching on Google. That’s right, we’ve expanded our indexing capabilities to include SVG. Feel free to check out our Webmaster Help Center for the complete list of file types we support, and our Webmaster Blog for more information on our SVG announcement.

By Maile Ohye, Google Developer Relations

[Gd] Stable and Beta Channel Updates


Google Chrome Releases: Stable and Beta Channel Updates

Google Chrome 6.0.472.53 has been released to the stable and beta channels for Windows, Mac, and Linux.  Updates from the previous stable release include:
  • Updated UI
  • Form Autofill
  • Syncing of extensions and Autofill data
  • Increased speed and stability
More information on these and other changes in Chrome 6 can be found on the Google Chrome blog. Download Chrome today!

Security fixes and rewards:
Please see the Chromium security page for more detail. Note that the referenced bugs may be kept private until a majority of our users are up to date with the fix.
  • [34414] Low Pop-up blocker bypass with blank frame target. Credit to Google Chrome Security Team (Inferno) and “ironfist99”.
  • [37201] Medium URL bar visual spoofing with homographic sequences. Credit to Chris Weber of Casaba Security.
  • [41654] Medium Apply more restrictions on setting clipboard content. Credit to Brook Novak.
  • [45659] High Stale pointer with SVG filters. Credit to Tavis Ormandy of the Google Security Team.
  • [45876] Medium Possible installed extension enumeration. Credit to Lostmon.
  • [46750] [51846] Low Browser NULL crash with WebSockets. Credit to Google Chrome Security Team (SkyLined), Google Chrome Security Team (Justin Schuh) and Keith Campbell.
  • [$1000] [50386] High Use-after-free in Notifications presenter. Credit to Sergey Glazunov.
  • [50839] High Notification permissions memory corruption. Credit to Michal Zalewski of the Google Security Team and Google Chrome Security Team (SkyLined).
  • [$1337] [51630] [51739] High Integer errors in WebSockets. Credit to Keith Campbell and Google Chrome Security Team (Cris Neckar).
  • [$500] [51653] High Memory corruption with counter nodes. Credit to kuzzcc.
  • [51727] Low Avoid storing excessive autocomplete entries. Credit to Google Chrome Security Team (Inferno).
  • [52443] High Stale pointer in focus handling. Credit to VUPEN Vulnerability Research Team (VUPEN-SR-2010-249).
  • [$1000] [52682] High Sandbox parameter deserialization error. Credit to Ashutosh Mehra and Vineet Batra of the Adobe Reader Sandbox Team.
  • [$500] [53001] Medium Cross-origin image theft. Credit to Isaac Dawson.
This release also fixes [51070] (Windows kernel bug workaround; credit to Marc Schoenefeld), which was incorrectly declared fixed in version 5.0.375.127.

In addition, we would like to credit Google Chrome Security Team (Inferno), James Robinson (Chromium development community), Google Chrome Security Team (Cris Neckar), Aki Helin of OUSPG, Fred Akalin (Chromium development community), Anna Popivanova, “myusualnickname”, Michal Zalewski of the Google Security Team, kuzzcc and Aaron Boodman (Chromium development community) for finding bugs during the development cycle such that they never reached a stable build.

If you find new issues, please let us know by filing a bug. If you would like to use the stable channel, you can find out more about changing your Chrome channel.

Jason Kersey
Google Chrome

[Gd] Brace for the Future


Android Developers Blog: Brace for the Future

[This post is by Dan Morrill, Open Source & Compatibility Program Manager. — Tim Bray]

Way back in November 2007 when Google announced Android, Andy Rubin said “We hope thousands of different phones will be powered by Android.” But now, Android’s growing beyond phones to new kinds of devices. (For instance, you might have read about the new 7” Galaxy Tab that our partners at Samsung just announced.) So, I wanted to point out a few interesting new gadgets that are coming soon running the latest versions of Android, 2.1 and 2.2.

For starters, the first Android-based non-phone handheld devices will be shipping over the next few months. Some people call these Mobile Internet Devices or Personal Media Players — MIDs or PMPs. Except for the phone part, PMP/MID devices look and work just like smartphones, but if your app really does require phone hardware to work correctly, you can follow some simple steps to make sure your app only appears on phones.
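One of those steps is declaring the hardware your app genuinely depends on in its manifest; a minimal sketch:

```xml
<!-- Declaring telephony as required keeps the app from being offered
     to phone-less devices such as PMPs and MIDs in Android Market. -->
<uses-feature android:name="android.hardware.telephony"
              android:required="true" />
```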

Next up are tablets. Besides the Samsung Galaxy Tab I mentioned, the Dell Streak, which has a 5” screen and blurs the line between a phone and a tablet, is now on sale. Of course, Android has supported screens of any size since version 1.6, but these are the first large-screen devices to actually ship with Android Market. A tablet’s biggest quirk, of course, is its larger screen.

It’s pretty rare that we see problems with existing apps running on large-screen devices, but at the same time many apps would benefit from making better use of the additional screen space. For instance, an email app might be improved by changing its UI from a list-oriented layout to a two-pane view. Fortunately, Android and the SDK make it easy to support multiple screen sizes in your app, so you can read up on our documentation and make sure your app makes the best use of the extra space on large screens.
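The manifest side of multi-screen support can be sketched as follows (the element and attributes are the standard Android ones; the values are illustrative, and resource qualifiers such as res/layout-large/ would supply the alternate two-pane layout mentioned above):

```xml
<!-- Declare which screen sizes the app supports; alternate resource
     directories (e.g. res/layout-large/) then provide layouts tuned
     to the extra space on big screens. -->
<supports-screens
    android:smallScreens="true"
    android:normalScreens="true"
    android:largeScreens="true" />
```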

Speaking of screen quirks, we’re also seeing the first devices whose natural screen orientation is landscape. For instance, Motorola’s CHARM and FLIPOUT phones have screens which are wider than they are tall, when used in the natural orientation. The majority of apps won’t even notice the difference, but if your app uses sensors like accelerometer or compass, you might need to double-check your code.

Now, the devices I’ve mentioned so far still have the same hardware that Android phones have, like compass and accelerometer sensors, cameras, and so on. However, there are also devices coming that will omit some of this hardware. For instance, you’ve probably heard of Google TV, which will get Android Market in 2011. Since Google TV is, you know, a stationary object, it won’t have a compass and accelerometer. It also won’t have a standard camera, since we decided there wasn’t a big audience for pictures of the dust bunnies behind your TV.

Fortunately, you can use our built-in tools to handle these cases and control which devices your app appears to in Android Market. Android lets you provide versions of your UI optimized for various screen configurations, and each device will pick the one that runs best. Meanwhile, Android Market will make sure your apps only appear to devices that can run them, by matching the features you list as required (via <uses-feature> tags) only with devices that have those features.

Android started on phones, but we’re growing to fit new kinds of devices. Now your Android app can run on almost anything, and the potential size of your audience is growing fast. But to fully unlock this additional reach, you should double-check your app and tweak it if you need to, so that it puts its best foot forward. Watch this blog over the next few weeks, as we post a series of detailed “tips and tricks” articles on how to get the most out of the new gadgets.

It’s official folks: we’re living in the future! Happy coding.


[Gd] Securing Android LVL Applications


Android Developers Blog: Securing Android LVL Applications

[This post is by Trevor Johns, who's a Developer Programs Engineer working on Android. — Tim Bray]

The Android Market licensing service is a powerful tool for protecting your applications against unauthorized use. The License Verification Library (LVL) is a key component. A determined attacker who’s willing to disassemble and reassemble code can eventually hack around the service; but application developers can make the hackers’ task immensely more difficult, to the point where it may simply not be worth their time.

Out of the box, the LVL protects against casual piracy: users who try to copy APKs directly from one device to another without purchasing the application. Here are some techniques to make things hard, even for technically skilled attackers who attempt to decompile your application and remove or disable LVL-related code.

  • You can obfuscate your application to make it difficult to reverse-engineer.

  • You can modify the licensing library itself to make it difficult to apply common cracking techniques.

  • You can make your application tamper-resistant.

  • You can offload license validation to a trusted server.

This can and should be done differently by each app developer. A guiding principle in the design of the licensing service is that attackers must be forced to crack each application individually, and unfortunately no client-side code can be made 100% secure. As a result, we depend on developers introducing additional complexity and heterogeneity into the license check code — something which requires human ingenuity and a detailed knowledge of the application the license library is being integrated into.

Technique: Code Obfuscation

The first line of defense in your application should be code obfuscation. Code obfuscation will not protect against automated attacks, and it doesn’t alter the flow of your program. However, it does make it more difficult for attackers to write the initial attack for an application, by removing symbols that would quickly reveal the original structure of a compiled application. As such, we strongly recommend using code obfuscation in all LVL installations.

To understand what an obfuscator does, consider the build process for your application: Your application is compiled and converted into .dex files and packaged in an APK for distribution on devices. The bytecode contains references to the original code — packages, classes, methods, and fields all retain their original (human readable) names in the compiled code. Attackers use this information to help reverse-engineer your program, and ultimately disable the license check.

Obfuscators replace these names with short, machine generated alternatives. Rather than seeing a call to dontAllow(), an attacker would see a call to a(). This makes it more difficult to intuit the purpose of these functions without access to the original source code.

There are a number of commercial and open-source obfuscators available for Java that will work with Android. We have had good experience with ProGuard, but we encourage you to explore a range of obfuscators to find the solution that works best for you.

We will be publishing a separate article soon that provides detailed advice on working with ProGuard. Until then, please refer to the ProGuard documentation.
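In the meantime, a minimal ProGuard configuration for an Android app might look like the sketch below (the -keep rules are illustrative, not exhaustive; Android entry points must keep their names because the system instantiates them by name, while everything else gets short machine-generated names):

```
# Keep classes the Android system instantiates by name;
# all other classes, methods, and fields are renamed.
-keep public class * extends android.app.Activity
-keep public class * extends android.app.Service
-keep public class * extends android.content.BroadcastReceiver
-keep public class * extends android.content.ContentProvider
```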

Technique: Modifying the license library

The second line of defense against attack from crackers is to modify the license verification library in such a way that it’s difficult for an attacker to modify the disassembled code and get a positive license check as result.

This actually provides protection against two different types of attack: it protects against attackers trying to crack your application, but it also prevents attacks designed to target other applications (or even the stock LVL distribution itself) from being easily ported over to your application. The goal should be to both increase the complexity of your application’s bytecode and make your application’s LVL implementation unique.

When modifying the license library, there are three areas that you will want to focus on:

  • The core licensing library logic.

  • The entry/exit points of the licensing library.

  • How your application invokes the licensing library and handles the license response.

In the case of the core licensing library, you’ll primarily want to focus on two classes which comprise the core of the LVL logic: LicenseChecker and LicenseValidator.

Quite simply, your goal is to modify these two classes as much as possible, in any way possible, while still retaining the original function of the application. Here are some ideas to get you started, but you’re encouraged to be creative:

  • Replace switch statements with if statements.

  • Use XOR or hash functions to derive new values for any constants used and check for those instead.

  • Remove unused code. For instance, if you’re sure you won’t need swappable policies, remove the Policy interface and implement the policy verification inline with the rest of LicenseValidator.

  • Move the entirety of the LVL into your own application’s package.

  • Spawn additional threads to handle different parts of license validation.

  • Replace functions with inline code where possible.

For example, consider the following function from LicenseValidator:

public void verify(PublicKey publicKey, int responseCode, String signedData, String signature) {
    // ... Response validation code omitted for brevity ...
    switch (responseCode) {
        // In Java bytecode, LICENSED will be converted to the constant 0x0
        case LICENSED:
        case LICENSED_OLD_KEY:
            LicenseResponse limiterResponse = mDeviceLimiter.isDeviceAllowed(userId);
            handleResponse(limiterResponse, data);
            break;
        // NOT_LICENSED will be converted to the constant 0x1
        case NOT_LICENSED:
            handleResponse(LicenseResponse.NOT_LICENSED, data);
            break;
        // ... Extra response codes also removed for brevity ...
    }
}

In this example, an attacker might try to swap the code belonging to the LICENSED and NOT_LICENSED cases, so that an unlicensed user will be treated as licensed. The integer values for LICENSED (0x0) and NOT_LICENSED (0x1) will be known to an attacker by studying the LVL source, so even obfuscation makes it very easy to locate where this check is performed in your application’s bytecode.

To make this more difficult, consider the following modification:

public void verify(PublicKey publicKey, int responseCode, String signedData, String signature) {
    // ... Response validation code omitted for brevity ...

    // Compute a derivative version of the response code.
    // Ideally, this should be placed as far from the responseCode switch as possible,
    // to prevent attackers from noticing the call to the CRC32 library, which would be
    // a strong hint as to what we're doing here. If you can add additional transformations
    // elsewhere before this value is used, that's even better.
    CRC32 crc32 = new CRC32();
    crc32.update(responseCode);
    long transformedResponseCode = crc32.getValue();

    // ... put unrelated application code here ...

    // crc32(LICENSED) == 3523407757
    if (transformedResponseCode == 3523407757L) {
        LicenseResponse limiterResponse = mDeviceLimiter.isDeviceAllowed(userId);
        handleResponse(limiterResponse, data);
    }

    // ... put unrelated application code here ...

    // crc32(LICENSED_OLD_KEY) == 1007455905
    if (transformedResponseCode == 1007455905L) {
        LicenseResponse limiterResponse = mDeviceLimiter.isDeviceAllowed(userId);
        handleResponse(limiterResponse, data);
    }

    // ... put unrelated application code here ...

    // crc32(NOT_LICENSED) == 2768625435
    if (transformedResponseCode == 2768625435L) {
        handleResponse(LicenseResponse.NOT_LICENSED, data);
    }
}

In this example, we’ve added additional code to transform the license response code into a different value. We’ve also removed the switch block, allowing us to inject unrelated application code between the three license response checks. (Remember: The goal is to make your application’s LVL implementation unique. Do not copy the code above verbatim — come up with your own approach.)

For the entry/exit points, be aware that attackers may try to write a counterfeit version of the LVL that implements the same public interface, then try to swap out the relevant classes in your application. To prevent this, consider adding additional arguments to the LicenseChecker constructor, as well as allow() and dontAllow() in the LicenseCheckerCallback. For example, you could pass in a nonce (a unique value) to LicenseChecker that must also be present when calling allow().
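The nonce idea can be sketched as follows. The class and method names here are hypothetical stand-ins for your modified LicenseChecker and LicenseCheckerCallback; only the pattern matters:

```java
import java.security.SecureRandom;

// Hypothetical sketch: the checker hands out a random nonce at construction
// time, and allow() only succeeds when the caller echoes it back. A counterfeit
// library swapped in by an attacker won't know the value.
public class NonceCheck {
    private final long nonce = new SecureRandom().nextLong();

    // Expose the nonce to your own code only, e.g. pass it along when you
    // construct your modified LicenseChecker.
    public long nonce() {
        return nonce;
    }

    // Stand-in for a modified LicenseCheckerCallback.allow(long).
    public boolean allow(long echoedNonce) {
        return echoedNonce == nonce;
    }
}
```

Since the nonce is generated fresh each run, a replayed or hardcoded value from a cracked copy of another app fails the comparison.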

Note: Renaming allow() and dontAllow() won’t make a difference, assuming that you’re using an obfuscator. The obfuscator will automatically rename these functions for you.

Be aware that attackers might try to attack the calls in your application to the LVL. For example, if you display a dialog on license failure with an “Exit” button, consider what would happen if an attacker were to comment out the line of code that displays that dialog. If the user never pushes the “Exit” button in the dialog (which is now not being displayed), will your application still terminate? To prevent this, consider invoking a different Activity to handle informing the user that their license is invalid, and immediately terminating the original Activity; add additional finish() statements to other parts of your code that will get executed in case the original one gets disabled; or set a timer that will cause your application to be terminated after a timeout. It’s also a good idea to defer the license check until your application has been running a few minutes, since attackers will be expecting the license check to occur during your application’s launch.

Finally, be aware that certain methods cannot be obfuscated, even when using a tool such as ProGuard. As a key example, onCreate() cannot be renamed, since it needs to remain callable by the Android system. Avoid putting license check code in these methods, since attackers will be looking for the LVL there.

Technique: Make your application tamper-resistant

In order for an attacker to remove the LVL from your code, they have to modify your code. Unless done precisely, this can be detected by your code. There are a few approaches you can use here.

The most obvious mechanism is to use a lightweight hash function, such as CRC32, and build a hash of your application’s code. You can then compare this checksum with a known good value. You can find the path of your application’s files by calling context.getApplicationInfo() — just be sure not to compute a checksum of the file that contains your checksum! (Consider storing this information on a third-party server.)
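The checksum idea reduces to a few lines. This is a sketch: file reading is omitted (in a real app you would read the bytes from the path in context.getApplicationInfo().sourceDir), and the known-good value would be computed at release time and, ideally, stored on your server:

```java
import java.util.zip.CRC32;

// Sketch: hash the application's bytes and compare against a known-good value.
public class TamperCheck {
    // CRC32 over raw bytes, e.g. the contents of your shipped APK or dex file.
    public static long checksum(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data);
        return crc.getValue();
    }

    // True when the data still matches the checksum recorded at release time.
    public static boolean looksIntact(byte[] data, long knownGoodChecksum) {
        return checksum(data) == knownGoodChecksum;
    }
}
```

CRC32 is fast but not cryptographic; it catches modification, not a determined forger, which is why the comparison itself should also be hidden using the techniques above.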

[In a late edit, we removed a suggestion that you use a check that relies on GetInstallerPackageName when one of our senior engineers pointed out that this is undocumented, unsupported, and only happens to work by accident. –Tim]

Also, you can check to see if your application is debuggable. If your application refuses to perform normally when the debug flag is set, it may be harder for an attacker to compromise:

boolean isDebuggable = (0 != (getApplicationInfo().flags & ApplicationInfo.FLAG_DEBUGGABLE));

Technique: Offload license validation to a trusted server

If your application has an online component, a very powerful technique to prevent piracy is to send a copy of the license server response, contained inside the ResponseData class, along with its signature, to your online server. Your server can then verify that the user is licensed, and if not, refuse to serve any online content.

Since the license response is cryptographically signed, your server can check to make sure that the license response hasn’t been tampered with by using the public RSA key stored in the Android Market publisher console.
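The server-side verification path can be sketched with the standard java.security API. For illustration we generate a throwaway key pair and sign some data ourselves; a real server would instead load the RSA public key from the publisher console, and the signed payload would be the actual ResponseData string from the device (the pipe-delimited layout in the sample string below is our assumption about its shape, not something to rely on):

```java
import java.security.GeneralSecurityException;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PublicKey;
import java.security.Signature;

// Sketch of the server-side check: accept the response only if it verifies
// against the publisher public key.
public class ResponseVerifier {
    public static boolean verify(PublicKey pub, byte[] signedData, byte[] signature)
            throws GeneralSecurityException {
        Signature verifier = Signature.getInstance("SHA1withRSA");
        verifier.initVerify(pub);
        verifier.update(signedData);
        return verifier.verify(signature);
    }

    public static void main(String[] args) throws Exception {
        // Throwaway key pair standing in for your publisher key.
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair pair = gen.generateKeyPair();

        byte[] data = "responseCode|nonce|package|version|userId|timestamp".getBytes();
        Signature signer = Signature.getInstance("SHA1withRSA");
        signer.initSign(pair.getPrivate());
        signer.update(data);
        byte[] sig = signer.sign();

        System.out.println(verify(pair.getPublic(), data, sig));  // genuine response
        data[0] ^= 1;  // any tampering invalidates the signature
        System.out.println(verify(pair.getPublic(), data, sig));
    }
}
```

Because only Google holds the matching private key, a response that verifies on your server must have come from the license service unmodified.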

When performing the server-side validation, you will want to check all of the following:

  • That the response signature is valid.

  • That the license service returned a LICENSED response.

  • That the package name and version code match the correct application.

  • That the license response has not expired (check the VT license response extra).

  • You should also log the userId field to ensure that a cracked application isn’t replaying a license response from another licensed user. (This would be visible by an abnormally high number of license checks coming from a single userId.)

To see how to properly verify a license response, look at LicenseValidator.verify().

As long as the license check is handled entirely in server-side code (and your server itself is secure), it’s worth noting that even an expert cracker cannot circumvent this mechanism. This is because your server is a trusted computing environment.

Remember that any code running on a computer under the user’s control (including their Android device) is untrusted. If you choose to inform the user that the server-side license validation has failed, this must only be done in an advisory capacity. You must still make sure that your server refuses to serve any content to an unlicensed user.


In summary, remember that your goal as an application developer is to make your application’s LVL implementation unique, difficult to trace when decompiled, and resistant to any changes that might be introduced. Realize that this might involve modifying your code in ways that seem counter-intuitive from a traditional software engineering viewpoint, such as removing functions and hiding license check routines inside unrelated code.

For added protection, consider moving the license check to a trusted server, where attackers will be unable to modify the license check code. While it’s impossible to write 100% secure validation code on client devices, this is attainable on a machine under your control.

And above all else, be creative. You have the advantage in that you have access to a fully annotated copy of your source code — attackers will be working with uncommented bytecode. Use this to your advantage.

Remember that, assuming you’ve followed the guidelines here, attackers will need to crack each new version of your application. Add new features and release often, and consider modifying your LVL implementation with each release to create additional work for attackers.

And above all else, listen to your users and keep them happy. The best defense against piracy isn’t technical, it’s emotional.