Friday, April 24, 2009

[Gd] Introducing GLSurfaceView


Android Developers Blog: Introducing GLSurfaceView

GLSurfaceView is a new API class in Android 1.5. GLSurfaceView makes OpenGL ES applications easier to write by:

  • Providing the glue code to connect OpenGL ES to the View system.
  • Providing the glue code to make OpenGL ES work with the Activity life-cycle.
  • Making it easy to choose an appropriate frame buffer pixel format.
  • Creating and managing a separate rendering thread to enable smooth animation.
  • Providing easy-to-use debugging tools for tracing OpenGL ES API calls and checking for errors.

GLSurfaceView is a good base for building an application that uses OpenGL ES for part or all of its rendering. A 2D or 3D action game would be a good candidate, as would a 2D or 3D data visualization application such as Google Maps StreetView.

The Simplest GLSurfaceView Application

Here's the source code to the simplest possible OpenGL ES application:


import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

import android.app.Activity;
import android.opengl.GLSurfaceView;
import android.os.Bundle;

public class ClearActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        mGLView = new GLSurfaceView(this);
        mGLView.setRenderer(new ClearRenderer());
        setContentView(mGLView);
    }

    @Override
    protected void onPause() {
        super.onPause();
        mGLView.onPause();
    }

    @Override
    protected void onResume() {
        super.onResume();
        mGLView.onResume();
    }

    private GLSurfaceView mGLView;
}

class ClearRenderer implements GLSurfaceView.Renderer {
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        // Do nothing special.
    }

    public void onSurfaceChanged(GL10 gl, int w, int h) {
        gl.glViewport(0, 0, w, h);
    }

    public void onDrawFrame(GL10 gl) {
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    }
}

This program doesn't do much: it clears the screen to black on every frame. But it is a complete OpenGL ES application that correctly implements the Android activity life-cycle: it pauses rendering when the activity is paused, and resumes it when the activity is resumed. You could use this application as the basis for non-interactive demonstration programs; just add more OpenGL calls to the ClearRenderer.onDrawFrame() method. Notice that you don't even need to subclass the GLSurfaceView view.

Note that the GLSurfaceView.Renderer interface has three methods:

The onSurfaceCreated() method is called at the start of rendering, and whenever the OpenGL ES drawing context has to be recreated. (The drawing context is typically lost and recreated when the activity is paused and resumed.) onSurfaceCreated() is a good place to create long-lived OpenGL resources such as textures.

The onSurfaceChanged() method is called when the surface changes size. It's a good place to set your OpenGL viewport. You may also want to set your camera here, if it's a fixed camera that doesn't move around the scene.

The onDrawFrame() method is called every frame, and is responsible for drawing the scene. You would typically start by calling glClear to clear the framebuffer, followed by other OpenGL ES calls to draw the current scene.

How about User Input?

If you want an interactive application (like a game), you will typically subclass GLSurfaceView, because that's an easy way of obtaining input events. Here's a slightly longer example showing how to do that:


import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

import android.app.Activity;
import android.content.Context;
import android.opengl.GLSurfaceView;
import android.os.Bundle;
import android.view.MotionEvent;

public class ClearActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        mGLView = new ClearGLSurfaceView(this);
        setContentView(mGLView);
    }

    @Override
    protected void onPause() {
        super.onPause();
        mGLView.onPause();
    }

    @Override
    protected void onResume() {
        super.onResume();
        mGLView.onResume();
    }

    private GLSurfaceView mGLView;
}

class ClearGLSurfaceView extends GLSurfaceView {
    public ClearGLSurfaceView(Context context) {
        super(context);
        mRenderer = new ClearRenderer();
        setRenderer(mRenderer);
    }

    public boolean onTouchEvent(final MotionEvent event) {
        queueEvent(new Runnable() {
            // This method will be called on the rendering thread:
            public void run() {
                mRenderer.setColor(event.getX() / getWidth(),
                        event.getY() / getHeight(), 1.0f);
            }
        });
        return true;
    }

    ClearRenderer mRenderer;
}

class ClearRenderer implements GLSurfaceView.Renderer {
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        // Do nothing special.
    }

    public void onSurfaceChanged(GL10 gl, int w, int h) {
        gl.glViewport(0, 0, w, h);
    }

    public void onDrawFrame(GL10 gl) {
        gl.glClearColor(mRed, mGreen, mBlue, 1.0f);
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    }

    public void setColor(float r, float g, float b) {
        mRed = r;
        mGreen = g;
        mBlue = b;
    }

    private float mRed;
    private float mGreen;
    private float mBlue;
}

This application clears the screen every frame. When you tap on the screen, it sets the clear color based on the (x,y) coordinates of your touch event. Note the use of queueEvent() in ClearGLSurfaceView.onTouchEvent(). The queueEvent() method is used to safely communicate between the UI thread and the rendering thread. If you prefer you can use some other Java cross-thread communication technique, such as synchronized methods on the Renderer class itself. But queueing events is often the simplest way of dealing with cross-thread communication.
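The post mentions synchronized methods as an alternative to queueEvent(). As a minimal sketch (the class name SafeColorHolder is invented for illustration), sharing the pending clear color between the UI thread and the rendering thread could look like this:

```java
// Hypothetical alternative to queueEvent(): guard the shared color with
// synchronized methods so the UI thread can write it and the rendering
// thread can read it safely.
class SafeColorHolder {
    private float red, green, blue;

    // Called from the UI thread, e.g. in onTouchEvent().
    public synchronized void setColor(float r, float g, float b) {
        red = r;
        green = g;
        blue = b;
    }

    // Called from the rendering thread, e.g. in onDrawFrame().
    public synchronized float[] getColor() {
        return new float[] { red, green, blue };
    }
}
```

onDrawFrame() would then call getColor() once per frame and pass the result to glClearColor(); the trade-off versus queueEvent() is that you take a small lock every frame instead of posting a Runnable per touch event.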

Other GLSurfaceView Samples

Tired of just clearing the screen? You can find more interesting samples in the API Demos sample in the SDK. All the OpenGL ES samples have been converted to use the GLSurfaceView view:

  • GLSurfaceView - a spinning triangle
  • Kube - a cube puzzle demo
  • Translucent GLSurfaceView - shows how to display 3D graphics on a translucent background
  • Textured Triangle - shows how to draw a textured 3D triangle
  • Sprite Text - shows how to draw text into a texture and then composite it into a 3D scene
  • Touch Rotate - shows how to rotate a 3D object in response to user input.

Choosing a Surface

GLSurfaceView helps you choose the type of surface to render to. Different Android devices support different types of surfaces, with no common subset, which makes it tricky to choose the best available surface on each device. By default, GLSurfaceView tries to find a surface that's as close as possible to a 16-bit RGB frame buffer with a 16-bit depth buffer. Depending upon your application's needs you may want to change this behavior. For example, the Translucent GLSurfaceView sample needs an alpha channel in order to render translucent data. GLSurfaceView provides an overloaded setEGLConfigChooser() method to give the developer control over which surface type is chosen:

setEGLConfigChooser(boolean needDepth)
Choose a config that's closest to R5G6B5, with or without a 16-bit depth buffer.
setEGLConfigChooser(int redSize, int greenSize,int blueSize, int alphaSize,int depthSize, int stencilSize)
Choose the config with the fewest bits per pixel that has at least as many bits per channel as specified in the arguments.
setEGLConfigChooser(EGLConfigChooser configChooser)
Allow total control over choosing a configuration. You pass in your own implementation of EGLConfigChooser, which gets to inspect the device's capabilities and choose a configuration.
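As a sketch (not code from the original post), requesting a surface with an alpha channel, as the Translucent GLSurfaceView sample needs, might look like the following; MyRenderer is a hypothetical renderer, and android.graphics.PixelFormat is assumed to be imported:

```java
// Sketch (assumed API use): ask for an 8888 surface with a 16-bit depth
// buffer so translucent rendering is possible.
GLSurfaceView view = new GLSurfaceView(this);
view.setEGLConfigChooser(8, 8, 8, 8, 16, 0); // red, green, blue, alpha, depth, stencil
view.getHolder().setFormat(PixelFormat.TRANSLUCENT);
view.setRenderer(new MyRenderer());
```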

Continuous Rendering vs. Render When Dirty

Most 3D applications, such as games or simulations, are continuously animated. But some 3D applications are more reactive: they wait passively until the user does something, and then react to it. For those types of applications, the default GLSurfaceView behavior of continuously redrawing the screen is a waste of time. If you are developing a reactive application, you can call GLSurfaceView.setRenderMode(RENDERMODE_WHEN_DIRTY), which turns off the continuous animation. Then you call GLSurfaceView.requestRender() whenever you want to re-render.
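A hedged sketch of the render-when-dirty pattern described above, reusing the ClearRenderer from the earlier example:

```java
// Sketch: a reactive app renders only when asked.
GLSurfaceView view = new GLSurfaceView(this);
view.setRenderer(new ClearRenderer());
view.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);

// Later, whenever something in the scene changes:
view.requestRender();
```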

Help With Debugging

GLSurfaceView has a handy built-in feature for debugging OpenGL ES applications: the GLSurfaceView.setDebugFlags() method can be used to enable logging and/or error checking your OpenGL ES calls. Call this method in your GLSurfaceView's constructor, before calling setRenderer():

public ClearGLSurfaceView(Context context) {
    super(context);
    // Turn on error-checking and logging
    setDebugFlags(DEBUG_CHECK_GL_ERROR | DEBUG_LOG_GL_CALLS);
    mRenderer = new ClearRenderer();
    setRenderer(mRenderer);
}

Learn about Android 1.5 and more at Google I/O. Members of the Android team will be there to give a series of in-depth technical sessions and to field your toughest questions.


[Gd] Tips on requesting reconsideration


Official Google Webmaster Central Blog: Tips on requesting reconsideration

Do you think your site might be penalized because of something that
happened on it? As two leaders of the reconsideration team, we recently made
a video to help you discover how to create a good reconsideration request,
including tips on what we look for on our side. Watch the video and then
let us know if you have questions in the comments!

Posted by Rachel Searles and Brian White, Search Quality Team

[Gd] Mercurial support for Project Hosting on Google Code


Google Code Blog: Mercurial support for Project Hosting on Google Code

We are happy to announce that Project Hosting on Google Code now supports the Mercurial version control system in addition to Subversion. This is being initially rolled out as a preview release to a few invited users on a per-project basis, so that we can iron out the kinks before making this available to the general public.

Mercurial, like Git and Bazaar, is a distributed version control system (DVCS) that enables developers to work offline and define more complex workflows such as peer-to-peer pushing/pulling of code. It also makes it easier for outside contributors to contribute to projects, as cloning and merging of remote repositories is really easy.
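The offline-commit workflow described above can be illustrated with a few Mercurial commands (the project URL and commit message here are hypothetical):

```shell
# Clone a remote repository (URL is illustrative only):
hg clone https://example.googlecode.com/hg/ myproject
cd myproject

# Edit files, then commit locally -- this works entirely offline:
hg commit -m "Fix issue 42"

# When back online, push local changesets to the remote repository:
hg push
```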

While there were several DVCSs that we could support, our decision to support Mercurial was based on two key reasons. The primary reason was to support our large base of existing Subversion users that want to use a distributed version control system. For these users we felt that Mercurial had the lowest barrier to adoption because of its similar command set, great documentation (including a great online book), and excellent tools such as Tortoise Hg. Second, given that Google Code's infrastructure is built for HTTP-based services, we found that Mercurial had the best protocol and performance characteristics for HTTP support. For more information, see our analysis.

If you would like to help us launch Mercurial and to try out the features as an invited user, please fill out the following form. We are currently looking for active projects with more than two users that are willing to try out Mercurial and work with us to identify issues and resolve them. For projects that plan on migrating from Subversion, see our conversion docs for the steps required for this process.

Our implementation of Mercurial is built on top of Bigtable, making it extremely scalable and reliable just like our Subversion on Bigtable implementation. For more information on our Mercurial implementation, we will have a TechTalk at Google I/O that will be led by Jacob Lee, one of the core engineers working on Mercurial support. Let us know if you plan on attending and we'll give you access to Mercurial ahead of the talk.

By David Baum, Software Engineer

[Gd] Live folders


Android Developers Blog: Live folders

Live folders, introduced in Android 1.5, let you display any source of data on the Home screen without forcing the user to launch an application. A live folder is simply a real-time view of a ContentProvider. As such, a live folder can be used to display all your contacts, your bookmarks, your email, your playlists, an RSS feed, and so on. The possibilities are endless! Android 1.5 ships with a few stock live folders to display your contacts. For instance, the screenshot below shows the content of the live folder that displays all my contacts with a phone number:

If a contacts sync happens in the background while I'm browsing this live folder, I will see the change happen in real time. Live folders are not only useful; it's also very easy to modify your application to make it provide a live folder. In this article, I will show you how to add a live folder to the Shelves application. You can download its source code and modify it by following my instructions to better understand how live folders work.

To give the user the option to create a new live folder, you first need to create a new activity with an intent filter whose action is android.intent.action.CREATE_LIVE_FOLDER. To do so, simply open AndroidManifest.xml and add something similar to this:

<activity
    android:name=".BookShelfLiveFolder"
    android:label="Books"
    android:icon="@drawable/ic_live_folder">
    <intent-filter>
        <action android:name="android.intent.action.CREATE_LIVE_FOLDER" />
        <category android:name="android.intent.category.DEFAULT" />
    </intent-filter>
</activity>

The label and icon of this activity are what the user will see on the Home screen when choosing a live folder to create:

Since you just need an intent filter, it is possible, and sometimes advised, to reuse an existing activity. In the case of Shelves, we will create a new activity, BookShelfLiveFolder. The role of this activity is to send an Intent result to Home containing the description of the live folder: its name, icon, display mode, and content URI. The content URI is very important as it describes what ContentProvider will be used to populate the live folder. The code of the activity is very simple, as you can see here:

public class BookShelfLiveFolder extends Activity {
    public static final Uri CONTENT_URI =
            Uri.parse("content://shelves/live_folders/books");

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        final Intent intent = getIntent();
        final String action = intent.getAction();

        if (LiveFolders.ACTION_CREATE_LIVE_FOLDER.equals(action)) {
            setResult(RESULT_OK, createLiveFolder(this, CONTENT_URI,
                    "Books", R.drawable.ic_live_folder));
        } else {
            setResult(RESULT_CANCELED);
        }

        finish();
    }

    private static Intent createLiveFolder(Context context, Uri uri,
            String name, int icon) {
        final Intent intent = new Intent();

        intent.setData(uri);
        intent.putExtra(LiveFolders.EXTRA_LIVE_FOLDER_NAME, name);
        intent.putExtra(LiveFolders.EXTRA_LIVE_FOLDER_ICON,
                Intent.ShortcutIconResource.fromContext(context, icon));
        intent.putExtra(LiveFolders.EXTRA_LIVE_FOLDER_DISPLAY_MODE,
                LiveFolders.DISPLAY_MODE_LIST);

        return intent;
    }
}
This activity, when invoked with the ACTION_CREATE_LIVE_FOLDER intent, returns an intent with a URI, content://shelves/live_folders/books, and three extras to describe the live folder. There are other extras and constants you can use and you should refer to the documentation of android.provider.LiveFolders for more details. When Home receives this intent, a new live folder is created on the user's desktop, with the name and icon you provided. Then, when the user clicks on the live folder to open it, Home queries the content provider referenced by the provided URI.

Live folders' content providers must obey specific naming rules. The Cursor returned by the query() method must have at least two columns named LiveFolders._ID and LiveFolders.NAME. The first one is the unique identifier of each item in the live folder and the second one is the name of the item. There are other column names you can use to specify an icon, a description, the intent to associate with the item (fired when the user clicks that item), etc. Again, refer to the documentation of android.provider.LiveFolders for more details.

In our example, all we need to do is modify the existing content provider in Shelves. First, we need to modify the URI_MATCHER to recognize our content://shelves/live_folders/books content URI:

private static final int LIVE_FOLDER_BOOKS = 4;
// ...
URI_MATCHER.addURI(AUTHORITY, "live_folders/books", LIVE_FOLDER_BOOKS);

Then we need to create a new projection map for the cursor. A projection map can be used to "rename" columns. In our case, we will replace BooksStore.Book._ID, BooksStore.Book.TITLE and BooksStore.Book.AUTHORS with LiveFolders._ID, LiveFolders.NAME and LiveFolders.DESCRIPTION:

private static final HashMap<String, String> LIVE_FOLDER_PROJECTION_MAP;
static {
    LIVE_FOLDER_PROJECTION_MAP = new HashMap<String, String>();
    LIVE_FOLDER_PROJECTION_MAP.put(LiveFolders._ID, BooksStore.Book._ID +
            " AS " + LiveFolders._ID);
    LIVE_FOLDER_PROJECTION_MAP.put(LiveFolders.NAME, BooksStore.Book.TITLE +
            " AS " + LiveFolders.NAME);
    LIVE_FOLDER_PROJECTION_MAP.put(LiveFolders.DESCRIPTION, BooksStore.Book.AUTHORS +
            " AS " + LiveFolders.DESCRIPTION);
}

Because we are providing a title and a description for each row, Home will automatically display each item of the live folder with two lines of text. Finally, we implement the query() method by supplying our projection map to the SQL query builder:

public Cursor query(Uri uri, String[] projection, String selection,
        String[] selectionArgs, String sortOrder) {

    SQLiteQueryBuilder qb = new SQLiteQueryBuilder();

    switch (URI_MATCHER.match(uri)) {
        // ...
        case LIVE_FOLDER_BOOKS:
            qb.setTables("books");
            qb.setProjectionMap(LIVE_FOLDER_PROJECTION_MAP);
            break;
        default:
            throw new IllegalArgumentException("Unknown URI " + uri);
    }

    SQLiteDatabase db = mOpenHelper.getReadableDatabase();
    Cursor c = qb.query(db, projection, selection, selectionArgs, null, null,
            BooksStore.Book.DEFAULT_SORT_ORDER);
    c.setNotificationUri(getContext().getContentResolver(), uri);

    return c;
}

You can now compile and deploy the application, go to the Home screen and try to add a live folder. I added a books live folder to my Home screen and when I open it, I can see the list of all of my books, with their titles and authors, and all it took was a few lines of code:

The live folders API is extremely simple and relies only on intents and content URIs. If you want to see more examples of live folder implementations, you can read the source code of the Contacts application and of the Contacts provider.

You can also download the result of our exercise, the modified version of Shelves with live folders support.

Learn about Android 1.5 and more at Google I/O. Members of the Android team will be there to give a series of in-depth technical sessions and to field your toughest questions.


Thursday, April 23, 2009

[Gd] Stable Update: Security Fix


Google Chrome Releases: Stable Update: Security Fix

Google Chrome's Stable channel has been updated to fix a security issue:

CVE-2009-1340 ChromeHTML protocol handler same-origin bypass
An error in handling URLs with a chromehtml: protocol could allow an attacker to run scripts of his choosing on any page or enumerate files on the local disk under certain conditions.

If a user has Google Chrome installed, visiting an attacker-controlled web page in Internet Explorer could have caused Google Chrome to launch, open multiple tabs, and load scripts that run after navigating to a URL of the attacker's choice. Such an attack only works if Chrome is not already running.

See for more details.

Affected versions: and earlier

Severity: High. This allows universal cross-site scripting (UXSS) without user interaction under certain conditions.

Credit: Roi Saltzman, Security Researcher at IBM Rational Application Security Research Group

--Mark Larson
Google Chrome Program Manager

[Gd] Calling all JavaScripters: submit your Chrome Experiments for Google I/O!


Google Code Blog: Calling all JavaScripters: submit your Chrome Experiments for Google I/O!

We launched Chrome Experiments last month to feature some of the crazy things that are now possible with JavaScript. Since then, a number of developers have submitted additional experiments, many of which have made us smile -- take a look at a few of them.

In May, we're going to feature Chrome Experiments during Google I/O, our largest developer event (May 27 - 28), as well as on the Google Code blog.

So if you haven't already started experimenting, here's your chance to create something cool, fun, or quirky with JavaScript. Please submit it by May 26th. We'll reveal the top ten experiments here on the Google Code Blog during Google I/O.

Happy experimenting!

By Aaron Koblin, Google Creative Lab

[Gd] Future-Proofing Your Apps


Android Developers Blog: Future-Proofing Your Apps

Hi, developers! I hope you've heard about the early-look version of the Android 1.5 SDK that we recently released. There are some great new features in there, but don't get too excited yet -- some of you will need to fix some problems in your apps before you can start taking advantage of Android 1.5.

We've done some fairly extensive testing of the popular apps on the Android Market, and it turns out that a few of those apps use some bad techniques that cause them to crash or behave strangely on Android 1.5. The list below is based on our observations of five ways that we've seen bad apps fail on 1.5. You can think of these as "anti-patterns" (that is, techniques to avoid) for Android development. If you've written an app with the Android 1.0 or 1.1 SDKs, you'll need to pay close attention.

Technique to Avoid, #1: Using Internal APIs

Even though we've always strongly advised against doing so, some developers have chosen to use unsupported or internal APIs. For instance, many developers are using the internal brightness control and bluetooth toggle APIs that were present in 1.0 and 1.1. A bug -- which is now fixed in Android 1.5 -- allowed apps to use those APIs without requesting permission. As a result, apps that use those APIs will break on 1.5. There are other changes to unsupported APIs in 1.5 besides these, so if you've used internal APIs in your apps, you need to update your apps to stop doing so. Even if they don't break on Android 1.5, there's a good chance they will on some later version. (There's some good news, though: because "flashlight" apps are so popular, we've added the "screenBrightness" field on the WindowManager.LayoutParams class just for that use case.)

Technique to Avoid, #2: Directly Manipulating Settings

Okay, strictly speaking this one isn't evil, since this is a change in behavior that we made to Android itself. But we made it because some developers were doing naughty things: a number of apps were changing system settings silently without even notifying the user. For instance, some apps turn on GPS without asking the user, and others might turn on data roaming.

As a result, applications can no longer directly manipulate the values of certain system Settings, even if they previously had permission to do so. For instance, apps can no longer directly turn on or off GPS. These apps won't crash, but the APIs in question now have no effect, and do nothing. Instead, apps will need to issue an Intent to launch the appropriate Settings configuration screen, so that the user can change these settings manually. For details, see the android.provider.Settings.Secure class, which you can find in the 1.5_pre SDK documentation (and later). Note that only Settings that were moved to the Settings.Secure class are affected. Other, less sensitive, settings will continue to have the same behavior as in Android 1.1.
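As a hedged sketch of the intent-based approach described above (GPS used as the example; assumes android.provider.Settings and android.content.Intent are imported):

```java
// Sketch: instead of toggling GPS directly, send the user to the system
// location settings screen so they can change it themselves.
Intent intent = new Intent(Settings.ACTION_LOCATION_SOURCE_SETTINGS);
startActivity(intent);
```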

Technique to Avoid, #3: Going Overboard with Layouts

Due to changes in the View rendering infrastructure, unreasonably deep (more than 10 or so) or broad (more than 30 total) View hierarchies in layouts are now likely to cause crashes. This was always a risk for excessively complex layouts, but you can think of Android 1.5 as being better than 1.1 at exposing this problem. Most developers won't need to worry about this, but if your app has very complicated layouts, you'll need to put it on a diet. You can simplify your layouts using the more advanced layout classes like FrameLayout and TableLayout.
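For illustration (resource names here are invented), a FrameLayout can stack two children directly instead of wrapping them in several nested LinearLayouts, keeping the hierarchy shallow:

```xml
<!-- Sketch: a caption stacked over an image with a two-deep hierarchy. -->
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">
    <ImageView
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:src="@drawable/background" />
    <TextView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="@string/caption" />
</FrameLayout>
```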

Technique to Avoid, #4: Bad Hardware Assumptions

Android 1.5 includes support for soft keyboards, and there will soon be many devices that run Android but do not have physical keyboards. If your application assumes the presence of a physical keyboard (such as if you have created a custom View that sinks keypress events) you should make sure it degrades gracefully on devices that only have soft keyboards. For more information on this, keep an eye on this blog as we'll be posting more detailed information about handling the new soft keyboards.

Technique to Avoid, #5: Incautious Rotations

Devices running Android 1.5 and later can automatically rotate the screen, depending on how the user orients the device. Some 1.5 devices will do this by default, and on all others it can be turned on by the user. This can sometimes result in unpredictable behavior from applications that do their own reorientations (whether using the accelerometer, or something else.) This often happens when applications assume that the screen can only rotate if the physical keyboard is exposed; if the device lacks a physical keyboard, these apps do not expect to be reoriented, which is a coding error. Developers should be sure that their applications can gracefully handle being reoriented at any time.

Also, apps that use the accelerometer directly to reorient themselves sometimes compete with the system doing the same thing, with odd results. And finally, some apps that use the accelerometer to detect things like shaking motions and that don't lock their orientation to portrait or landscape, often end up flipping back and forth between orientations. This can be irritating to the user. (You can lock your app's orientation to portrait or landscape using the 'android:screenOrientation' attribute in your AndroidManifest.xml.)
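A manifest fragment locking orientation as described above might look like this (the activity name is hypothetical):

```xml
<!-- Sketch: lock a shake-detecting game to portrait in AndroidManifest.xml. -->
<activity android:name=".ShakeGameActivity"
    android:screenOrientation="portrait" />
```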

Have any of your apps used one of these dubious techniques? If so, break out your IDE, duct tape, and spackle, and patch 'em up. I'm pretty excited by the new features in the Android 1.5 SDK, and I look forward to seeing your apps on my own 1.5-equipped phone -- but I can't, if they won't run! Fortunately, the fixes for these are pretty simple, and you can start fixing all of the above even with the 1.1_r1 SDK release.

By the way, if you'd like to fully immerse yourself in Android 1.5, join us at Google I/O! It's my pleasure to shamelessly plug an event that's shaping up to be the Android developer event of the year. We've added two more sessions—one on multimedia jujitsu, and a particularly interesting session on the Eyes-Free Android project—with even more yet to come. I thought Google I/O was a pretty killer event last year, and this year's looking even better, especially in terms of Android content.

I hope to meet many of you there, but either way, Happy Coding!


Wednesday, April 22, 2009

[Gd] SDK version 1.2.1 released


Google App Engine Blog: SDK version 1.2.1 released

We've released version 1.2.1 of the SDK for the Python runtime. Here are some highlights from the release notes:

  • Stable, unique IDs for User objects. The Users service now provides a unique user_id for each user that stays the same even if the user changes her email address.

  • The Images API now supports compositing images and calculating a color histogram for an image.

  • New allowed mail attachment types: ics, vcf

  • Urlfetch requests can now set the User-Agent header.

  • An App Engine-specific version of the Python PyCrypto cryptography library is now available. Learn more at

  • The bulk loader configuration format has changed to allow non-CSV input. This change is not backwards compatible, so if you've written code for the bulk loader, it will need to be updated.

  • An early release of the bulk downloader is also now available.
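As a hedged illustration of the Urlfetch change listed above (assuming the standard App Engine Python API; the URL and user-agent string are invented):

```python
# Sketch: setting the User-Agent header on a urlfetch request,
# newly allowed in SDK 1.2.1.
from google.appengine.api import urlfetch

result = urlfetch.fetch(
    url="http://example.com/feed",
    headers={"User-Agent": "MyApp/1.0"})
if result.status_code == 200:
    body = result.content
```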

For a full list see the SdkReleaseNotes wiki page.

Downloads for Windows, Mac, and Linux are available on the Downloads page. This SDK update was for the Python runtime, so please post your feedback in the Python runtime discussion group.

Posted by Jeff Scudder, App Engine Team

[Gd] Creating an Input Method


Android Developers Blog: Creating an Input Method

To create an input method (IME) for entering text into text fields and other Views, you need to extend android.inputmethodservice.InputMethodService. This API provides much of the basic implementation for an input method, in terms of managing the state and visibility of the input method and communicating with the currently visible activity.

A good starting point would be the SoftKeyboard sample code provided as part of the SDK. Modify this code to start building your own input method.

An input method is packaged like any other application or service. In the AndroidManifest.xml file, you declare the input method as a service, with the appropriate intent filter and any associated meta data:

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
        package="com.example.fastinput">

    <application android:label="@string/app_label">

        <!-- Declares the input method service -->
        <service android:name="FastInputIME"
                android:label="@string/fast_input_label"
                android:permission="android.permission.BIND_INPUT_METHOD">
            <intent-filter>
                <action android:name="android.view.InputMethod" />
            </intent-filter>
            <meta-data android:name="android.view.im"
                    android:resource="@xml/method" />
        </service>

        <!-- Optional activities. A good idea to have some user settings. -->
        <activity android:name="FastInputIMESettings"
                android:label="@string/fast_input_settings">
            <intent-filter>
                <action android:name="android.intent.action.MAIN"/>
            </intent-filter>
        </activity>
    </application>
</manifest>

If your input method allows the user to tweak some settings, you should provide a settings activity that can be launched from the Settings application. This is optional and you may choose to provide all user settings directly in your IME's UI.

The typical life-cycle of an InputMethodService looks like this:

Visual Elements

There are 2 main visual elements for an input method—the input view and the candidates view. You don't have to follow this style though, if one of them is not relevant to your input method experience.

Input View

This is where the user can input text either in the form of keypresses, handwriting or other gestures. When the input method is displayed for the first time, InputMethodService.onCreateInputView() will be called. Create and return the view hierarchy that you would like to display in the input method window.

Candidates View

This is where potential word corrections or completions are presented to the user for selection. Again, this may or may not be relevant to your input method and you can return null from calls to InputMethodService.onCreateCandidatesView(), which is the default behavior.

Designing for the different Input Types

An application's text fields can have different input types specified on them, such as free form text, numeric, URL, email address and search. When you implement a new input method, you need to be aware of the different input types. Input methods are not automatically switched for different input types and so you need to support all types in your IME. However, the IME is not responsible for validating the input sent to the application. That's the responsibility of the application.

For example, the LatinIME provided with the Android platform provides different layouts for text and phone number entry:

InputMethodService.onStartInputView() is called with an EditorInfo object that contains details about the input type and other attributes of the application's text field.

(EditorInfo.inputType & EditorInfo.TYPE_MASK_CLASS) can be one of many different values, including:

  • TYPE_CLASS_NUMBER
  • TYPE_CLASS_DATETIME
  • TYPE_CLASS_PHONE
  • TYPE_CLASS_TEXT
See android.text.InputType for more details.

EditorInfo.inputType can contain other masked bits that indicate the class variation and other flags. For example, TYPE_TEXT_VARIATION_PASSWORD or TYPE_TEXT_VARIATION_URI or TYPE_TEXT_FLAG_AUTO_COMPLETE.
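As a sketch of acting on the input class (here, `attribute` is the EditorInfo passed to onStartInputView(), and mCurKeyboard, mSymbolsKeyboard, and mQwertyKeyboard are hypothetical fields):

```java
// Mask off variation and flag bits, then pick a keyboard layout.
int inputClass = attribute.inputType & EditorInfo.TYPE_MASK_CLASS;
switch (inputClass) {
    case EditorInfo.TYPE_CLASS_NUMBER:
    case EditorInfo.TYPE_CLASS_PHONE:
    case EditorInfo.TYPE_CLASS_DATETIME:
        mCurKeyboard = mSymbolsKeyboard;
        break;
    default:
        mCurKeyboard = mQwertyKeyboard;
}
```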

Password fields

Pay specific attention when sending text to password fields. Make sure that the password is not visible within your UI - in neither the input view nor the candidates view. And do not save the password anywhere without explicitly informing the user.

Landscape vs. portrait

The UI needs to be able to scale between landscape and portrait orientations. In non-fullscreen IME mode, leave sufficient space for the application to show the text field and any associated context. Preferably, no more than half the screen should be occupied by the IME. In fullscreen IME mode this is not an issue.

Sending text to the application

There are two ways to send text to the application. You can either send individual key events or you can edit the text around the cursor in the application's text field.

To send a key event, you can simply construct KeyEvent objects and call InputConnection.sendKeyEvent(). Here are some examples:

InputConnection ic = getCurrentInputConnection();
long eventTime = SystemClock.uptimeMillis();
ic.sendKeyEvent(new KeyEvent(eventTime, eventTime,
KeyEvent.ACTION_DOWN, keyEventCode, 0, 0, 0, 0,
ic.sendKeyEvent(new KeyEvent(SystemClock.uptimeMillis(), eventTime,
KeyEvent.ACTION_UP, keyEventCode, 0, 0, 0, 0,

Or use the convenience method:

sendDownUpKeyEvents(keyEventCode);

Note: It is recommended to use the above method for certain fields such as phone number fields because of filters that may be applied to the text after each key press. Return key and delete key should also be sent as raw key events for certain input types, as applications may be watching for specific key events in order to perform an action.

When editing text in a text field, some of the more useful methods on android.view.inputmethod.InputConnection are:

  • getTextBeforeCursor()
  • getTextAfterCursor()
  • deleteSurroundingText()
  • commitText()

For example, let's say the text "Fell" is to the left of the cursor and you want to replace it with "Hello!":

InputConnection ic = getCurrentInputConnection();
ic.deleteSurroundingText(4, 0);
ic.commitText("Hello", 1);
ic.commitText("!", 1);
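
If it helps to see those two calls in slow motion, here is a toy model of the editor buffer. EditorBuffer is a made-up class, not the real InputConnection, and it ignores the newCursorPosition argument that the real commitText() takes:

```java
public class EditorBuffer {
    private final StringBuilder text = new StringBuilder();
    private int cursor = 0;

    /** Insert text at the cursor and move the cursor past it. */
    public void commitText(CharSequence s) {
        text.insert(cursor, s.toString());
        cursor += s.length();
    }

    /** Delete 'before' characters left of the cursor and 'after' characters right of it. */
    public void deleteSurroundingText(int before, int after) {
        int start = Math.max(0, cursor - before);
        int end = Math.min(text.length(), cursor + after);
        text.delete(cursor, end);   // delete after the cursor first
        text.delete(start, cursor); // then delete before it
        cursor = start;
    }

    public String getText() { return text.toString(); }

    public static void main(String[] args) {
        EditorBuffer buf = new EditorBuffer();
        buf.commitText("Fell");            // "Fell", cursor at the end
        buf.deleteSurroundingText(4, 0);   // buffer is now empty
        buf.commitText("Hello");
        buf.commitText("!");
        System.out.println(buf.getText()); // Hello!
    }
}
```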

Composing text before committing

If your input method does some kind of text prediction or requires multiple steps to compose a word or glyph, you can show the progress in the text field until the user commits the word and then you can replace the partial composition with the completed text. The text that is being composed will be highlighted in the text field in some fashion, such as an underline.

InputConnection ic = getCurrentInputConnection();
ic.setComposingText("Composi", 1);
ic.setComposingText("Composin", 1);
ic.commitText("Composing ", 1);
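
The composing region can be modeled the same way. ComposeBuffer below is a made-up illustration in which each setComposingText() call replaces the previous partial composition (the part an editor would underline) and commitText() finalizes it:

```java
public class ComposeBuffer {
    private final StringBuilder committed = new StringBuilder();
    private String composing = "";

    /** Replace the current composing text; shown underlined in a real text field. */
    public void setComposingText(String s) {
        composing = s;
    }

    /** Discard the partial composition and append the final text. */
    public void commitText(String s) {
        composing = "";
        committed.append(s);
    }

    /** What the user would see: committed text followed by the composing region. */
    public String displayed() {
        return committed + composing;
    }

    public static void main(String[] args) {
        ComposeBuffer buf = new ComposeBuffer();
        buf.setComposingText("Composi");
        buf.setComposingText("Composin");    // replaces "Composi"
        buf.commitText("Composing ");        // composition is finalized
        System.out.println(buf.displayed());
    }
}
```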

Intercepting hard key events

Even though the input method window doesn't have explicit focus, it receives hard key events first and can choose to consume them or forward them along to the application. For instance, you may want to consume the directional keys to navigate within your UI for candidate selection during composition. Or you may want to trap the back key to dismiss any popups originating from the input method window. To intercept hard keys, override InputMethodService.onKeyDown() and InputMethodService.onKeyUp(). Remember to call super.onKey* if you don't want to consume a certain key yourself.
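
The consume-or-forward decision might be factored out as in this sketch. HardKeyFilter is a hypothetical helper (the key code values are mirrored from android.view.KeyEvent); a real IME would consult it from onKeyDown()/onKeyUp() before deciding whether to call through to super:

```java
public class HardKeyFilter {
    // Key code values mirrored from android.view.KeyEvent
    static final int KEYCODE_BACK = 4;
    static final int KEYCODE_DPAD_UP = 19;
    static final int KEYCODE_DPAD_RIGHT = 22;

    /** Decide whether the IME keeps this key or forwards it to the application. */
    static boolean shouldConsume(int keyCode, boolean composing, boolean popupShowing) {
        if (popupShowing && keyCode == KEYCODE_BACK) {
            return true;  // dismiss our own popup instead of letting back reach the app
        }
        if (composing && keyCode >= KEYCODE_DPAD_UP && keyCode <= KEYCODE_DPAD_RIGHT) {
            return true;  // use the directional keys for candidate selection
        }
        return false;     // forward: call through to super.onKeyDown()/onKeyUp()
    }

    public static void main(String[] args) {
        System.out.println(shouldConsume(KEYCODE_BACK, false, true));     // true
        System.out.println(shouldConsume(KEYCODE_DPAD_UP, false, false)); // false
    }
}
```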

Other considerations

  • Provide a way for the user to easily bring up any associated settings directly from the input method UI.
  • Provide a way for the user to switch to a different input method (multiple input methods may be installed) directly from the input method UI.
  • Bring up the UI quickly - preload or lazy-load any large resources so that the user sees the input method quickly on tapping on a text field. And cache any resources and views for subsequent invocations of the input method.
  • On the flip side, any large memory allocations should be released soon after the input method window is hidden so that applications can have sufficient memory to run. Consider using a delayed message to release resources if the input method is in a hidden state for a few seconds.
  • Make sure that most common characters can be entered using the input method, as users may use punctuation in passwords or user names and they shouldn't be stuck in a situation where they can't enter a certain character in order to gain access into a password-locked device.


For a real-world example with support for multiple input types and text prediction, see the LatinIME source code. The Android 1.5 SDK also includes a SoftKeyboard sample.

Learn about Android 1.5 and more at Google I/O. Members of the Android team will be there to give a series of in-depth technical sessions and to field your toughest questions.


[Gd] Who's @ Google I/O - spotlight on Google Web Toolkit


Google Code Blog: Who's @ Google I/O - spotlight on Google Web Toolkit

Google Web Toolkit, or GWT for short, recently went live with its 1.6 release, which also included the Google Plugin for Eclipse and integration with App Engine's Java language support. Google I/O will be rich with GWT content, including a number of sessions on improving productivity and app performance with GWT. In addition, a number of external GWT developers will be leading some of these sessions and/or taking part in the Developer Sandbox.

As mentioned last week, we're giving you a closer look at developers who'll be presenting or demoing at I/O. Here is a taste of these GWT developers below. (New to GWT? Check out this overview)
  • JBoss, a Division of Red Hat
    JBoss is well-known by developers for their enterprise open source middleware. Red Hat developer communities such as the Fedora Project have collaborated with Google on a number of developer initiatives over the years, including Google Summer of Code, Hibernate Shards, integration with Drools and the Seam Framework, and Google Gadgets integration with JBoss Portal. JBoss will be present at the Developer Sandbox.

  • Timefire
    Timefire produces highly scalable, interactive visualizations of up to millions of data points for business intelligence, analytics, finance, sensor networks, and other industries in what they like to call "Google Maps, but for the time dimension." Their platform is built on Google Web Toolkit from the ground up, but also runs natively on Android. Timefire also uses App Engine's new Java language support for their social charting tool, Gadgets, OpenSocial, GData, Google Maps, GViz, YouTube Player API, and Protocol Buffers. Ray Cromwell will be at the Developer Sandbox and will also speak at two sessions: Building Applications on the Google OpenStack, and Progressively Enhance AJAX Applications with Google Web Toolkit and GQuery.

  • StudyBlue
    StudyBlue is an academic network which enables students to connect with each other and offers study tools. StudyBlue's website is built entirely with GWT. According to StudyBlue, GWT allows for complete AJAX integration without sacrificing usability or integration capabilities. StudyBlue will be at the Sandbox.

  • Lombardi Blueprint
    Lombardi Blueprint is a cloud-based process discovery and documentation platform accessible from any browser. They've used GWT since early 2007 to write the client side of Lombardi Blueprint. GWT has enabled Lombardi to focus on writing and maintaining their Java code, while taking care of creating the browser-specific optimized AJAX for them. Alex Moffat and Damon Lundin will be at the Developer Sandbox as well as leading a session, Effective GWT: Developing a complex, high-performance app with Google Web Toolkit. (Check out Alex Moffat's video about Lombardi's use of GWT)
Finally, one little-known fact: a number of Google products were developed with the help of GWT, including Google Moderator, Health, Checkout, Image Labeler, and Base.

Don't forget - early registration for Google I/O ends May 1. This means $100 off the standard list price (and a copy of the Chrome comic book). To register, check out the latest sessions, or see more developers who'll be presenting at I/O, visit

*Follow us for the latest I/O updates: @googleio.

By Christine Tsai, Google Developer Products

Tuesday, April 21, 2009

[Gd] Updating Applications for On-screen Input Methods


Android Developers Blog: Updating Applications for On-screen Input Methods

One of the major new features we are introducing in Android 1.5 is our Input Method Framework (IMF), which allows developers to create on-screen input methods such as software keyboards. This article provides an overview of what Android input method editors (IMEs) are and what an application developer needs to do to work well with them. The IMF allows for a new class of Android devices, such as those without a hardware keyboard, so it is important that your application work well with it to give the users of such devices a great experience.

What is an input method?

The Android IMF is designed to support a variety of IMEs, including soft keyboards, handwriting recognizers, and hard keyboard translators. Our focus, however, will be on soft keyboards, since this is the kind of input method that is currently part of the platform.

A user will usually access the current IME by tapping on a text view to edit, as shown here in the home screen:

The soft keyboard is positioned at the bottom of the screen over the application's window. To organize the available space between the application and IME, we use a few approaches; the one shown here is called pan and scan, and simply involves scrolling the application window around so that the currently focused view is visible. This is the default mode, since it is the safest for existing applications.

Most often the preferred screen layout is a resize, where the application's window is resized to be entirely visible. An example is shown here, when composing an e-mail message:

The size of the application window is changed so that none of it is hidden by the IME, allowing full access to both the application and IME. This of course only works for applications that have a resizeable area that can be reduced to make enough space, but the vertical space in this mode is actually no less than what is available in landscape orientation, so very often an application can already accommodate it.

The final major mode is fullscreen or extract mode. This is used when the IME is too large to reasonably share space with the underlying application. With the standard IMEs, you will only encounter this situation when the screen is in a landscape orientation, although other IMEs are free to use it whenever they desire. In this case the application window is left as-is, and the IME simply displays fullscreen on top of it, as shown here:

Because the IME is covering the application, it has its own editing area, which shows the text actually contained in the application. There are also some limited opportunities the application has to customize parts of the IME (the "done" button at the top and enter key label at the bottom) to improve the user experience.

Basic XML attributes for controlling IMEs

There are a number of things the system does to try to help existing applications work with IMEs as well as possible, such as:

  • Use pan and scan mode by default, unless it can reasonably guess that resize mode will work based on the existence of lists, scroll views, etc.
  • Analyze the various existing TextView attributes to guess at the kind of content (numbers, plain text, etc) to help the soft keyboard display an appropriate key layout.
  • Assign a few default actions to the fullscreen IME, such as "next field" and "done".

There are also some simple things you can do in your application that will often greatly improve its user experience. Note that, except where explicitly mentioned, all of the things suggested here will not tie your application to Android 1.5 -- it will still work on older releases, which will simply ignore these new options.

Specifying each EditText control's input type

The most important thing for an application to do is use the new android:inputType attribute on each EditText, which provides much richer information about the text content. This attribute actually replaces many existing attributes (android:password, android:singleLine, android:numeric, android:phoneNumber, android:capitalize, android:autoText, android:editable); if you specify both, Cupcake devices will use the new android:inputType attribute and ignore the others.

The input type attribute has three pieces:

  • The class is the overall interpretation of characters. The currently supported classes are text (plain text), number (decimal number), phone (phone number), and datetime (a date or time).
  • The variation is a further refinement on the class. In the attribute you will normally specify the class and variation together, with the class as a prefix. For example, textEmailAddress is a text field where the user will enter an e-mail address, so the key layout will have an '@' character within easy reach, and numberSigned is a numeric field with a sign. If only the class is specified, then you get the default/generic variant.
  • Additional flags can be specified that supply further refinement. These flags are specific to a class. For example, some flags for the text class are textCapSentences, textAutoCorrect, and textMultiline.

As an example, here is the new EditText for the IM application's message text view:

    <EditText android:id="@+id/edtInput"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:inputType="textShortMessage|textAutoCorrect|textCapSentences|textMultiLine"
        android:imeOptions="actionSend|flagNoEnterAction" />

A full description of all of the input types can be found in the documentation. It is important to make use of the correct input types that are available, so that the soft keyboard can use the optimal keyboard layout for the text the user will be entering.

Enabling resize mode and other window features

The next most important thing for you to do is specify the overall behavior of your window in relation to the input method. The most visible aspect of this is controlling resize vs. pan and scan mode, but there are other things you can do as well to improve your user experience.

You will usually control this behavior through the android:windowSoftInputMode attribute on each <activity> definition in your AndroidManifest.xml. Like the input type, there are a couple different pieces of data that can be specified here by combining them together:

  • The window adjustment mode is specified with either adjustResize or adjustPan. It is highly recommended that you always specify one or the other.
  • You can further control whether the IME will be shown automatically when your activity is displayed and other situations where the user moves to it. The system won't automatically show an IME by default, but in some cases it can be convenient for the user if an application enables this behavior. You can request this with stateVisible. There are also a number of other state options for finer-grained control that you can find in the documentation.

A typical example of this field can be seen in the edit contact activity, which ensures it is resized and automatically displays the IME for the user:

    <activity android:name="EditContactActivity"
        android:windowSoftInputMode="stateVisible|adjustResize">
        ...
    </activity>

For non-activity windows, there is a new Window.setSoftInputMode() method that can be used to control their behavior. Note that calling this API will make your application incompatible with previous Android platforms.

Controlling the action buttons

The final customization we will look at is the "action" buttons in the IME. There are currently two types of actions:

  • The enter key on a soft keyboard is typically bound to an action when not operating on a multi-line edit text. For example, on the G1, pressing the hard enter key will typically move to the next field, or the application will intercept it to execute an action; with a soft keyboard, this overloading of the enter key remains, since the enter button simply sends an enter key event.
  • When in fullscreen mode, an IME may also put an additional action button to the right of the text being edited, giving the user quick access to a common application operation.

These options are controlled with the android:imeOptions attribute on TextView. The value you supply here can be any combination of:

  • One of the pre-defined action constants (actionGo, actionSearch, actionSend, actionNext, actionDone). If none of these are specified, the system will infer either actionNext or actionDone depending on whether there is a focusable field after this one; you can explicitly force no action with actionNone.
  • The flagNoEnterAction option tells the IME that the action should not be available on the enter key, even if the text itself is not multi-line. This avoids having unrecoverable actions (like send) that can be accidentally touched by the user while typing.
  • The flagNoAccessoryAction removes the action button from the text area, leaving more room for text.
  • The flagNoExtractUi completely removes the text area, allowing the application to be seen behind it.

The previous IM application message view also provides an example of an interesting use of imeOptions, to specify the send action but not let it be shown on the enter key:

    android:imeOptions="actionSend|flagNoEnterAction"
APIs for controlling IMEs

For more advanced control over the IME, there are a variety of new APIs you can use. Unless special care is taken (such as by using reflection), using these APIs will cause your application to be incompatible with previous versions of Android, and you should make sure you specify android:minSdkVersion="3" in your manifest.

The primary API is the new android.view.inputmethod.InputMethodManager class, which you can retrieve with Context.getSystemService(). It allows you to interact with the global input method state, such as explicitly hiding or showing the current IME's input area.

There are also new window flags controlling input method interaction, which you can control through the existing Window.addFlags() method and new Window.setSoftInputMode() method. The PopupWindow class has grown corresponding methods to control these options on its window. One thing in particular to be aware of is the new WindowManager.LayoutParams.FLAG_ALT_FOCUSABLE_IM constant, which is used to control whether a window is on top of or behind the current IME.

Most of the interaction between an active IME and application is done through the android.view.inputmethod.InputConnection class. This is the API an application implements, which an IME calls to perform the appropriate edit operations on the application. You won't normally need to worry about this, since TextView provides its own implementation for itself.

There are also a handful of new View APIs, the most important of these being onCreateInputConnection() which creates a new InputConnection for an IME (and fills in an android.view.inputmethod.EditorInfo structure with your input type, IME options, and other data); again, most developers won't need to worry about this, since TextView takes care of it for you.



[Gd] Google Analytics Data Export API has Launched!


Official Google Data APIs Blog: Google Analytics Data Export API has Launched!

We are very excited to announce a new member to the Google Data API family, Google Analytics! For those of you who don't know, Analytics is a powerful web analysis tool that provides incredible amounts of data about where visitors come from, what they do while on your site and where they go from there. The best part is that it's free for everyone!

The new Google Analytics Data Export API is now publicly available to all Analytics users as a Labs API that provides an easy to use way to get read-only access to your Analytics data. All report data that is available to you through the web interface will also be available through this new Google Data API. In addition to the standard Google Data API protocol of making requests over HTTP and accessing your data in XML, we will also be providing both a Java and JavaScript client library to make it even easier to integrate with your Analytics information.

With the availability of this API, you all now have a standardized way to integrate your Google Analytics Data with your own business data to extend existing products or create new standalone applications. Want to see custom views of your Analytics data? Create your own dashboards and gadgets that pull from the Analytics API. Want features that aren't included in the web interface? Build them yourself instead of waiting for them to be developed. Take a look at this Android application from Actual Metrics or this desktop application from Desktop-Reporting to see examples of what some developers have already done.

To dive in and begin writing your own apps, make sure you go to the Analytics API section of the Google Code website to find all of the necessary documentation. For key announcements, code changes, and updates, sign up for the Google Analytics API Notify e-mail group, which we promise will only send out e-mails when there is something that directly affects developers. Lastly, to share ideas and get feedback from other developers, join the Google Analytics API Group.

For more details on Google Analytics and the new API, check out the Analytics blog. For more information about building gadgets with the JavaScript library and other topics related to the Google Data APIs, make sure to check out the website for our developer conference, Google I/O, which will be taking place from May 27-28th in San Francisco.

Posted by Nick Mihailovski, Google Analytics API Team

[Gd] Attention Developers: Google Analytics now has an API!


Google Code Blog: Attention Developers: Google Analytics now has an API!

Today we are pleased to announce the launch of the Google Analytics Data Export API. This new API is being launched in Labs and is available to all Analytics users. If you haven't already heard, Google Analytics is a free, powerful web analytics tool that provides a wealth of data about how visitors find your website, where they go, and whether they turn into customers.

So what's so exciting about this API?

The Analytics API will allow developers to extend Google Analytics in new and creative ways that benefit developers, organizations and end users. Large organizations and agencies now have a standardized platform for integrating Analytics data with other business data. Developers can integrate Google Analytics into their existing products and create standalone Google Analytics applications. Users could see snapshots of their Analytics data in developer created dashboards and gadgets. For example, how would you like to access Google Analytics from your phone? Now you can, with this Android application from Actual Metrics. How about accessing Analytics from your desktop? It's here from Desktop-Reporting.

So how does it work?

We made the API very easy to use. First, there are no complicated developer tokens; you only need to request an authentication token. Second, the Analytics Export API is free and available to all Google Analytics users. The Analytics API is a GData API based on the Atom 1.0 and RSS 2.0 syndication formats. This is the same API protocol used by Google Calendar, Finance, and Webmaster Tools. If you've used any of these APIs in the past, the Analytics Export API will look very familiar to you.

Accessing your Google Analytics data generally follows these three steps:
  • Request an authentication token from your Google Account
  • Create a URL with the data you'd like to get back from the API
  • Make an HTTP request to the Export API using the authentication token and the URL you created
Currently the Google Analytics API supports two GData feeds: an Account Feed (which lists all the Google Analytics accounts and profiles you have access to) and a Data Feed (which allows access to all the data available through the GA interface). The Analytics data feed is very powerful and allows you to query which GA dimensions and metrics you want to access, for a specified date range and even across a subset of data.
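
As a sketch of step 2, a Data Feed query URL is just the feed address plus query parameters. The parameter names (ids, dimensions, metrics, start-date, end-date) are the documented GData query parameters; the profile id ga:12345 is a made-up placeholder, and DataFeedUrl is an illustrative helper rather than part of any client library:

```java
public class DataFeedUrl {
    static final String DATA_FEED = "https://www.google.com/analytics/feeds/data";

    /** Assemble a Data Feed query URL from its documented query parameters. */
    static String build(String ids, String dimensions, String metrics,
            String startDate, String endDate) {
        return DATA_FEED
                + "?ids=" + ids
                + "&dimensions=" + dimensions
                + "&metrics=" + metrics
                + "&start-date=" + startDate
                + "&end-date=" + endDate;
    }

    public static void main(String[] args) {
        // Visits by referral source for April 2009, for a placeholder profile
        System.out.println(build("ga:12345", "ga:source", "ga:visits",
                "2009-04-01", "2009-04-30"));
    }
}
```

The resulting URL would then be fetched over HTTP (step 3) with the authentication token from step 1 in the request headers.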

So it's now simple to access GA data to answer questions like:
  • What are the top referral sources by conversions to my site?
  • What are the top browser language settings in the United States vs. the United Kingdom?
  • What are the top keyword refinements and destination pages being used on my internal site search?
How do I get started?

There are three key resources you'll want to use when you start developing on top of the Google Analytics API. First we've provided two client libraries to abstract and simplify the process. The Java client library is available in the GData client library. And a JavaScript client library is now available through the Google AJAX APIs GData loader. We're also working on supporting more programming languages. In the meantime, for any programming language you want to use, you can make requests directly to the API over HTTP and access the data in XML. You can find example code, a developer guide, FAQ, and the complete API reference at Google Code.

Second, be sure to sign up for the Google Analytics API Notify email group so you get the key announcements on feature updates, code changes and other service related news that relate to the API. (Don't worry, this will be a low-traffic email list and we promise to only send emails when there is something important that affects developers.)

Finally, you'll want to become a part of the Google Analytics Developer community by joining the Google Analytics APIs Group for developers. This user forum is a great way to share ideas and get feedback from other developers. We also check in on these forums so let us know what you think about the API there, and share your ideas and your applications with us. We look forward to seeing your creativity!

By Nick Mihailovski and the Google Analytics API Team

[Gd] Guest post: 3D graphics in the browser


Chromium Blog: Guest post: 3D graphics in the browser

Today, we shared with the open source community an early version of O3D, a new shader-based API for 3D graphics in the browser. We are excited about this release: we believe that a 3D API for the web will allow web developers to create powerful, immersive 3D apps that are comparable to the experience offered by client applications and game consoles. This will make the web better, not to mention more fun!

O3D is still at an early stage and is not a part of the Chromium code base. However, we hope that, combined with projects like Mozilla's Canvas 3D, it will encourage the discussion within the graphics and web communities about a new open web standard on 3D graphics for the web. With JavaScript (and browsers) becoming faster every day, we believe it is the right time for such a standard to emerge. To help you participate in this broader discussion, Google has created a forum where you can submit suggestions on what features a 3D API for the web should have. 

If you are interested in learning more about O3D, you can visit us at

A video of the O3D Beach Demo

Posted by Henry Bridge, O3D Product Manager and Gregg Tavares, Software Engineer

[Gd] Modifying Chow-Down Part 2: Make it Faster!


YouTube API Blog: Modifying Chow-Down Part 2: Make it Faster!

As promised in the previous blog post, I was given the task of making our Chow Down GData sample a little faster. It would sometimes take a while to load restaurant information because the requests to YouTube and Picasa Web Albums were slow to process on the backend. Since this could leave the user staring at a "loading" bar for several seconds, something had to be done.

Originally, the application used the Python client library to retrieve information from YouTube and PWA and then stored it in memcache. The new solution instead retrieves these feeds directly in the browser using the json-in-script support of the Google Data APIs. This approach worked well for Chow Down because we were not retrieving any private information and so did not need to authenticate as a user.

Another benefit of using the JSON feeds is that the browser can asynchronously request results from both YouTube and PWA at the same time and render the results as soon as they are returned. This helps decrease the "perceived load time", since the user sees information start to load instead of just watching a progress bar.

The code for the entire sample is available on

But you can see all of the logic for retrieving the feeds using JavaScript in the ajax_restaurant_info template:

The code makes use of the jQuery JavaScript library in order to remain compact and compatible.

So if your site is using the Data APIs of one or more Google properties and you don't need authentication, consider switching to the JSON feeds to improve perceived latency and let your pages load the AJAXy way.

[Gd] Toward an open web standard for 3D graphics (part 2): Introducing O3D


Google Code Blog: Toward an open web standard for 3D graphics (part 2): Introducing O3D

Most content on the web today is in 2D, but a lot of information is more fun and useful in 3D. Projects like Google Earth and SketchUp demonstrate our passion and commitment to enabling users to create and interact with 3D content. We'd like to see the web offering the same type of 3D experiences that can be found on the desktop. That's why, a few weeks ago, we announced our plans to contribute our technology and web development expertise to the discussions about 3D for the web within Khronos and the broader developer community.

Today, we're making our first contribution to this effort by sharing the plugin implementation of O3D: a new, shader-based, low-level graphics API for creating interactive 3D applications in a web browser. When we started working on O3D, we focused on creating a modern 3D API that's optimized for the web. We wanted to build an API that runs on multiple operating systems and browsers, performs well in JavaScript, and offers the capabilities developers need to create a diverse set of rich applications. O3D is still in an early stage, but we're making it available now to help inform the public discussion about 3D graphics in the browser. We've also created a forum to enable developers to submit suggestions on features and functionality they desire from a 3D API for the web.

If you are interested in learning more about O3D, you can visit our site, subscribe to our blog and join our discussion groups. We also invite you to join us at Google I/O (May 27th -28th), where you can see presentations about O3D and meet with the team.

A video of the O3D Beach Demo:

By Matt Papakipos, Director of Engineering and Vangelis Kokkevis, Software Engineer

Monday, April 20, 2009

[Gd] Introducing home screen widgets and the AppWidget framework


Android Developers Blog: Introducing home screen widgets and the AppWidget framework

One exciting new feature in the Android 1.5 SDK is the AppWidget framework which allows developers to write "widgets" that people can drop onto their home screen and interact with. Widgets can provide a quick glimpse into full-featured apps, such as showing upcoming calendar events, or viewing details about a song playing in the background.

When widgets are dropped onto the home screen, they are given a reserved space to display custom content provided by your app. Users can also interact with your app through the widget, for example pausing or switching music tracks. If you have a background service, you can push widget updates on your own schedule; otherwise, the AppWidget framework provides an automatic update mechanism.

At a high level, each widget is a BroadcastReceiver paired with XML metadata describing the widget details. The AppWidget framework communicates with your widget through broadcast intents, such as when it requests an update. Widget updates are built and sent using RemoteViews which package up a layout and content to be shown on the home screen.

You can easily add widgets into your existing app, and in this article I'll walk through a quick example: writing a widget to show the Wiktionary "Word of the day." The full source code is available, but I'll point out the AppWidget-specific code in detail here.

First, you'll need some XML metadata to describe the widget, including the home screen area you'd like to reserve, an initial layout to show, and how often you'd like to be updated. The default Android home screen uses a cell-based layout, so it rounds your requested size up to the next-nearest cell size. This can be a little confusing, so here's a quick equation to help:

Minimum size in dip = (Number of cells * 74dip) - 2dip
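
The equation above is easy to sanity-check in code. minDip() is just the formula, not a framework API:

```java
public class WidgetSize {
    /** Minimum size in dip for a span of 'cells' home screen cells. */
    static int minDip(int cells) {
        return cells * 74 - 2;
    }

    public static void main(String[] args) {
        System.out.println(minDip(2)); // 146, the width of a 2-cell-wide widget
        System.out.println(minDip(1)); // 72, the height of a 1-cell-tall widget
    }
}
```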

In this example, we want our widget to be 2 cells wide and 1 cell tall, which means we should request a minimum size of 146dip x 72dip. We're also going to request updates once per day, which is roughly every 86,400,000 milliseconds. Here's what our widget XML metadata looks like:

<appwidget-provider xmlns:android="http://schemas.android.com/apk/res/android"
    android:minWidth="146dip"
    android:minHeight="72dip"
    android:updatePeriodMillis="86400000"
    android:initialLayout="@layout/widget_word"
    />
Next, let's pair this XML metadata with a BroadcastReceiver in the AndroidManifest:

<!-- Broadcast Receiver that will process AppWidget updates -->
<receiver android:name=".WordWidget" android:label="@string/widget_name">
    <intent-filter>
        <action android:name="android.appwidget.action.APPWIDGET_UPDATE" />
    </intent-filter>
    <meta-data android:name="android.appwidget.provider" android:resource="@xml/widget_word" />
</receiver>

<!-- Service to perform web API queries -->
<service android:name=".WordWidget$UpdateService" />

Finally, let's write the BroadcastReceiver code to actually handle AppWidget requests. To help widgets manage all of the various broadcast events, there is a helper class called AppWidgetProvider, which we'll use here. One very important thing to notice is that we're launching a background service to perform the actual update. This is because BroadcastReceivers are subject to the Application Not Responding (ANR) timer, which may prompt users to force close our app if it's taking too long. Making a web request might take several seconds, so we use the service to avoid any ANR timeouts.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

import android.app.PendingIntent;
import android.app.Service;
import android.appwidget.AppWidgetManager;
import android.appwidget.AppWidgetProvider;
import android.content.ComponentName;
import android.content.Context;
import android.content.Intent;
import android.content.res.Resources;
import android.net.Uri;
import android.os.IBinder;
import android.text.format.Time;
import android.util.Log;
import android.widget.RemoteViews;

// ApiException, ParseException, SimpleWikiHelper, and the R resources
// referenced below come from the sample project's source code.

/**
 * Define a simple widget that shows the Wiktionary "Word of the day." To build
 * an update we spawn a background {@link Service} to perform the API queries.
 */
public class WordWidget extends AppWidgetProvider {
    @Override
    public void onUpdate(Context context, AppWidgetManager appWidgetManager,
            int[] appWidgetIds) {
        // To prevent any ANR timeouts, we perform the update in a service
        context.startService(new Intent(context, UpdateService.class));
    }

    public static class UpdateService extends Service {
        @Override
        public void onStart(Intent intent, int startId) {
            // Build the widget update for today
            RemoteViews updateViews = buildUpdate(this);

            // Push update for this widget to the home screen
            ComponentName thisWidget = new ComponentName(this, WordWidget.class);
            AppWidgetManager manager = AppWidgetManager.getInstance(this);
            manager.updateAppWidget(thisWidget, updateViews);
        }

        /**
         * Build a widget update to show the current Wiktionary
         * "Word of the day." Will block until the online API returns.
         */
        public RemoteViews buildUpdate(Context context) {
            // Pick out month names from resources
            Resources res = context.getResources();
            String[] monthNames = res.getStringArray(R.array.month_names);

            // Find current month and day
            Time today = new Time();
            today.setToNow();

            // Build today's page title, like "Wiktionary:Word of the day/March 21"
            String pageName = res.getString(R.string.template_wotd_title,
                    monthNames[today.month], today.monthDay);
            RemoteViews updateViews = null;
            String pageContent = "";

            try {
                // Try querying the Wiktionary API for today's word
                pageContent = SimpleWikiHelper.getPageContent(pageName, false);
            } catch (ApiException e) {
                Log.e("WordWidget", "Couldn't contact API", e);
            } catch (ParseException e) {
                Log.e("WordWidget", "Couldn't parse API response", e);
            }

            // Use a regular expression to parse out the word and its definition
            Pattern pattern = Pattern.compile(SimpleWikiHelper.WORD_OF_DAY_REGEX);
            Matcher matcher = pattern.matcher(pageContent);
            if (matcher.find()) {
                // Build an update that holds the updated widget contents
                updateViews = new RemoteViews(context.getPackageName(), R.layout.widget_word);

                String wordTitle = matcher.group(1);
                updateViews.setTextViewText(R.id.word_title, wordTitle);

                // When user clicks on widget, launch to Wiktionary definition page
                String definePage = res.getString(R.string.template_define_url, wordTitle);
                Intent defineIntent = new Intent(Intent.ACTION_VIEW, Uri.parse(definePage));
                PendingIntent pendingIntent = PendingIntent.getActivity(context,
                        0 /* no requestCode */, defineIntent, 0 /* no flags */);
                updateViews.setOnClickPendingIntent(R.id.widget, pendingIntent);

            } else {
                // Didn't find word of day, so show error message
                updateViews = new RemoteViews(context.getPackageName(), R.layout.widget_message);
                CharSequence errorMessage = context.getText(R.string.widget_error);
                updateViews.setTextViewText(R.id.message, errorMessage);
            }
            return updateViews;
        }

        @Override
        public IBinder onBind(Intent intent) {
            // We don't need to bind to this service
            return null;
        }
    }
}
And there you have it: a simple widget that shows the Wiktionary "Word of the day." When an update is requested, we query the online API and push the newest data to the home screen. The AppWidget framework automatically requests updates from us as needed, such as when a new widget is inserted, and again each day to load the new "Word of the day."

Finally, some words of wisdom. Widgets are designed for longer-term content that doesn't update very often, and updating more frequently than every hour can quickly eat up battery and bandwidth. Consider updating as infrequently as possible, or letting your users pick a custom update frequency. For example, some people might want a stock ticker to update every 15 minutes, or maybe only four times a day. I'll be talking about additional strategies for saving battery life as part of a session I'm giving at Google I/O.

One last cool thing to mention is that the AppWidget framework is abstracted in both directions, meaning alternative home screens can also contain widgets. Your widgets can be inserted into any home screen that supports the AppWidget framework.

We've already written several widgets ourselves, such as the Calendar and Music widgets, but we're even more excited to see the widgets you'll write!