Friday, June 10, 2011

[Gd] Go play in the sandbox!


OpenSocial API Blog: Go play in the sandbox!

OpenSocial 2.0 Container Open to All!
At the OpenSocial State of the Union on May 12th, we (Andrew Davis and Ryan Baxter of IBM) announced that we have built a sandbox environment for app developers to test their apps against the OpenSocial 2.0 specification.

We have built a sample collaboration application that contains an activity stream, inbox, and an area to render arbitrary gadgets.  The sandbox is based on a daily build of Shindig 3.0 which is the reference implementation for OpenSocial 2.0. Gadget developers who are interested in building applications for OpenSocial 2.0 containers can go to the sandbox and add any gadget they please to test it out.

Future Plans

We plan to continue to work on the sandbox to refine it and make it more complete.  The social data in the collaboration application is not very robust, and we'd like to include a more complete set so developers can better test their applications.  If you've got ideas, let us know--there are lots of ways you can help!

The data in the inbox and activity stream is currently static, but we plan to add functionality so developers can add their own activities and emails for a given session.  This will be key for testing OpenSocial gadgets that use embedded experiences. There are also several pieces of the OpenSocial specification which have not been implemented in the container yet: for example, gadget preferences, pubsub2, and some gadget-to-container APIs like gadget.window.setTitle.

Over the coming weeks and months we'll work on implementing the missing pieces of the specification so the sandbox is more complete.  We've also reached out to the Shindig community so a daily build of Shindig is automatically deployed to the server.  This will allow app developers to develop gadgets against cutting edge OpenSocial proposals. At some point the application will become part of Shindig, allowing the community to continue to enhance the sandbox as the specification progresses.

We would love to hear your feedback, so please post your ideas to the specification's Google group.

Posted on behalf of Ryan Baxter and Andrew Davis by Mark Weitzel, President, OpenSocial Foundation

Wednesday, June 8, 2011

[Gd] Dev Channel Update


Google Chrome Releases: Dev Channel Update

The Dev channel has been updated to 13.0.782.13 for Windows, Mac, Linux, and Chrome Frame.  This release contains a number of UI tweaks and stability fixes.  The full list of changes is available in the SVN revision log.  Interested in switching to the Dev channel?  Find out how.  If you find a new issue, please let us know by filing a bug.

Anthony Laforge
Google Chrome

[Gd] Dev Channel Update


Google Chrome Releases: Dev Channel Update

The Dev channel has been updated to 13.0.782.11 for Windows, Mac, and Chrome Frame.  This release contains a number of UI tweaks and stability fixes.  The full list of changes is available in the SVN revision log.  Interested in switching to the Dev channel?  Find out how.  If you find a new issue, please let us know by filing a bug.

Anthony Laforge
Google Chrome

[Gd] Chrome Stable Release


Google Chrome Releases: Chrome Stable Release

The Google Chrome team is happy to announce the release of Chrome 12 to the Stable Channel for all platforms.  Chrome 12.0.742.91 includes a number of new features and updates, including:
  • Hardware accelerated 3D CSS
  • New Safe Browsing protection against downloading malicious files
  • Ability to delete Flash cookies from inside Chrome
  • Launch Apps by name from the Omnibox
  • Integrated Sync into new settings pages
  • Improved screen reader support
  • New warning when hitting Command-Q on Mac
  • Removal of Google Gears
Security fixes and rewards:
Please see the Chromium security page for more detail. Note that the referenced bugs may be kept private until a majority of our users are up to date with the fix.
  • [$2000] [73962] [79746] High CVE-2011-1808: Use-after-free due to integer issues in float handling. Credit to miaubiz.
  • [75496] Medium CVE-2011-1809: Use-after-free in accessibility support. Credit to Google Chrome Security Team (SkyLined).
  • [75643] Low CVE-2011-1810: Visit history information leak in CSS. Credit to Jesse Mohrland of Microsoft and Microsoft Vulnerability Research (MSVR).
  • [76034] Low CVE-2011-1811: Browser crash with lots of form submissions. Credit to “DimitrisV22”.
  • [$1337] [77026] Medium CVE-2011-1812: Extensions permission bypass. Credit to kuzzcc.
  • [78516] High CVE-2011-1813: Stale pointer in extension framework. Credit to Google Chrome Security Team (Inferno).
  • [79362] Medium CVE-2011-1814: Read from uninitialized pointer. Credit to Eric Roman of the Chromium development community.
  • [79862] Low CVE-2011-1815: Extension script injection into new tab page. Credit to kuzzcc.
  • [80358] Medium CVE-2011-1816: Use-after-free in developer tools. Credit to kuzzcc.
  • [$500] [81916] Medium CVE-2011-1817: Browser memory corruption in history deletion. Credit to Collin Payne.
  • [$1000] [81949] High CVE-2011-1818: Use-after-free in image loader. Credit to miaubiz.
  • [$1000] [83010] Medium CVE-2011-1819: Extension injection into chrome:// pages. Credit to Vladislavas Jarmalis, plus subsequent independent discovery by Sergey Glazunov.
  • [$3133.7] [83275] High CVE-2011-2332: Same origin bypass in v8. Credit to Sergey Glazunov.
  • [$1000] [83743] High CVE-2011-2342: Same origin bypass in DOM. Credit to Sergey Glazunov.
In addition, we would like to thank David Levin of the Chromium development community, miaubiz, Christian Holler and Martin Barbella for working with us in the development cycle and preventing bugs from ever reaching the stable channel. Various rewards were issued.

We’d also like to call particular attention to Sergey Glazunov’s $3133.7 reward. Although the linked bug is not of critical severity, it was accompanied by a beautiful chain of lesser severity bugs which demonstrated critical impact. It deserves a more detailed write-up at a later date.

You can find out more about Chrome 12 at the official Chrome Blog.  The full list of changes is available in the SVN revision logs (Trunk, Branch).  Interested in switching to the Stable channel?  Find out how.  If you find a new issue, please let us know by filing a bug.

Jason Kersey
Google Chrome

[Gd] Beta Channel Update


Google Chrome Releases: Beta Channel Update

The Chrome Beta channel has been updated to 12.0.742.91 for all platforms.  This release contains additional stability fixes.  Interested in switching to the Beta channel?  Find out how.  If you find a new issue, please let us know by filing a bug.

Jason Kersey
Google Chrome

[Gd] Changes to Required Minimum Functionality for AdWords API Clients


AdWords API Blog: Changes to Required Minimum Functionality for AdWords API Clients

Today we released the v060611 update to the Required Minimum Functionality (RMF) for AdWords API Clients. As a reminder, the goal of the RMF is to ensure that advertisers have access to many of the unique and valuable features that AdWords has to offer. We update the Required Minimum Functionality list approximately every six months, and the last update was over 9 months ago in August 2010.

Note: End-Advertiser-Only AdWords API Clients and Internal-Only AdWords API Clients (each as defined in the Terms & Conditions) are not required to fully implement the required AdWords API features outlined below, and described in detail here.

All other AdWords API Clients (as defined in the Terms & Conditions) must fully implement the required AdWords API features outlined below, and described in detail here, by October 15, 2011.

The key changes we have made in this update to the RMF are:
  • Added more features and report types
          We have added several features that we believe to be of high value for advertisers. These include sitelinks, conversion tracking, conversion optimizer, and enhanced CPC. We have also added new reports such as search performance and geographic reports.

  • Requirements for use of the BulkOpportunityService, TrafficEstimatorService and TargetingIdeaService
          BulkOpportunityService, TargetingIdeaService and TrafficEstimatorService are useful services to get new ideas for expanding and refining an AdWords account. We have added a requirement that if you use any of these services you must also implement creation functionality and management functionality for either Search or Display.

  • Advance Notice of Planned Changes
          We feel strongly that our advertisers should have the best possible user experience with AdWords and therefore have added the requirement that you send us mocks for any material changes to your tool at least 2 weeks in advance of making these changes. These mocks can be sent to: Note that you don’t need to wait for approval to launch these changes.

More details on all these changes are available here. To maintain RMF compliance, all new required features must be added by October 15, 2011. If you have any questions about RMF please submit them to the forum.

Also, per our terms and conditions, Google retains the right to modify the terms and conditions at any time. In an ongoing effort to improve clarity, we have updated the AdWords API Terms & Conditions. For example, we are now more explicit in allowing third-party tools to enable bulk editing of common fields (like keyword bids, or campaign budgets) across networks. Please review the complete Terms & Conditions here. Note that continued use of the API means that you accept these terms. You can refuse to accept the terms by ceasing to use the API.

-The Google AdWords API Team

[Gd] Add Gesture Search to your Android apps


The official Google Code blog: Add Gesture Search to your Android apps

By Yang Li, Research Scientist

Gesture Search from Google Labs now has an API. You can use the API to easily integrate Gesture Search into your Android apps, so your users can gesture to write text and search for application-specific data. For example, a mobile ordering application for a restaurant might have a long list of menu items; with Gesture Search, users can draw letters to narrow their search.

Another way to use Gesture Search is to enable users to select options using gestures that correspond to specific app functions, like a touch screen version of keyboard shortcuts, rather than forcing hierarchical menu navigation.

In this post, I’ll demonstrate how we can embed Gesture Search (1.4.0 or later) into an Android app that enables a user to find information about a specific country. To use Gesture Search, we first need to create a content provider named CountryProvider, according to the format required by the Android Search framework. This content provider consists of 238 country names.

Then, in GestureSearchAPIDemo, the main activity of the app, we invoke Gesture Search when a user selects a menu item. (Gesture Search can be invoked in other ways depending on specific applications.) To do this, we create an Intent with the action "" and the URI of the content provider. If the data is protected (for example, see AndroidManifest.xml), we also need to grant read permission for the content URI to Gesture Search. We then call startActivityForResult to invoke Gesture Search.
public boolean onCreateOptionsMenu(Menu menu) {
    menu.add(0, GESTURE_SEARCH_ID, 0, R.string.menu_gesture_search)
            .setShortcut('0', 'g').setIcon(android.R.drawable.ic_menu_search);
    return true;
}

public boolean onOptionsItemSelected(MenuItem item) {
    switch (item.getItemId()) {
        case GESTURE_SEARCH_ID:
            try {
                Intent intent = new Intent();
                // Content URI of the CountryProvider; the field name here is illustrative
                intent.setData(CountryProvider.CONTENT_URI);
                // Grant Gesture Search read permission for the protected content URI
                intent.addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION);
                intent.putExtra(SHOW_MODE, SHOW_ALL);
                intent.putExtra(THEME, THEME_LIGHT);
                startActivityForResult(intent, GESTURE_SEARCH_ID);
            } catch (ActivityNotFoundException e) {
                Log.e("GestureSearchExample", "Gesture Search is not installed");
            }
            break;
    }
    return super.onOptionsItemSelected(item);
}
In the code snippet above, we also specify that we want to show all of the country names when Gesture Search is brought up by intent.putExtra(SHOW_MODE, SHOW_ALL). The parameter name and its possible values are defined as follows:
/**
 * Optionally, specify what should be shown when launching Gesture Search.
 * If this is not specified, SHOW_HISTORY will be used as a default value.
 */
private static final String SHOW_MODE = "show";
/** Possible values for invoking mode */
// Show the visited items
private static final int SHOW_HISTORY = 0;
// Show nothing (a blank screen)
private static final int SHOW_NONE = 1;
// Show all of the data items
private static final int SHOW_ALL = 2;

/**
 * The theme of Gesture Search can be light or dark.
 * By default, Gesture Search will use a dark theme.
 */
private static final String THEME = "theme";
private static final int THEME_LIGHT = 0;
private static final int THEME_DARK = 1;

/** Keys for results returned by Gesture Search */
private static final String SELECTED_ITEM_ID = "selected_item_id";
private static final String SELECTED_ITEM_NAME = "selected_item_name";
As you can see in the code, when Gesture Search appears, we can show a recently selected country name, or nothing. Gesture Search then appears with a list of all the country names. The user can draw gestures directly on top of the list and a target item will pop up at the top. When a user taps a country name, Gesture Search exits and returns the result to the calling app. The following method is invoked for processing the user selection result, reading the Id and the name of the chosen data item.
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (resultCode == Activity.RESULT_OK) {
        switch (requestCode) {
            case GESTURE_SEARCH_ID:
                long selectedItemId = data.getLongExtra(SELECTED_ITEM_ID, -1);
                String selectedItemName = data.getStringExtra(SELECTED_ITEM_NAME);
                // Print out the Id and name of the item that is selected
                // by the user in Gesture Search
                Log.d("GestureSearchExample", selectedItemId + ": " + selectedItemName);
                break;
        }
    }
}
To use the Gesture Search API, you must be sure Gesture Search is installed. To test this condition, catch ActivityNotFoundException as shown in the code snippet above and display a dialog asking the user to install Gesture Search.

You can download the sample code at

Yang Li builds interactive systems to make information easily accessible anywhere anytime. He likes watching movies and spending quality time with his family.

Posted by Scott Knaster, Editor

[Gd] Authorship markup and web search


Official Google Webmaster Central Blog: Authorship markup and web search

Webmaster level: Intermediate

Today we're beginning to support authorship markup—a way to connect authors with their content on the web. We're experimenting with using this data to help people find content from great authors in our search results.

We now support markup that enables websites to publicly link within their site from content to author pages. For example, if an author at The New York Times has written dozens of articles, using this markup, the webmaster can connect these articles with a New York Times author page. An author page describes and identifies the author, and can include things like the author’s bio, photo, articles and other links.

If you run a website with authored content, you’ll want to learn about authorship markup in our Help Center. The markup uses existing standards such as HTML5 (rel=”author”) and XFN (rel=”me”) to enable search engines and other web services to identify works by the same author across the web. If you're already doing structured data markup using microdata from, we'll interpret that authorship information as well.

We wanted to make sure the markup was as easy to implement as possible. To that end, we’ve already worked with several sites to markup their pages, including The New York Times, The Washington Post, CNET, Entertainment Weekly, The New Yorker and others. In addition, we’ve taken the extra step to add this markup to everything hosted by YouTube and Blogger. In the future, both platforms will automatically include this markup when you publish content.

We know that great content comes from great authors, and we’re looking closely at ways this markup could help us highlight authors and rank search results.

Posted by Othar Hansson, Software Engineer

[Gd] Pilot Webmaster Tools’ Search Queries data in Google Analytics


Official Google Webmaster Central Blog: Pilot Webmaster Tools’ Search Queries data in Google Analytics

Webmaster Level: All

Webmasters have long been asking for better integration between Google Webmaster Tools and Google Analytics. Today we’re happy to announce a limited pilot for Search Engine Optimization reports in Google Analytics, based on Search Queries data from Webmaster Tools.

In addition to including Search Queries data found in Webmaster Tools, these Search Engine Optimization reports also take advantage of Google Analytics’ advanced filtering and visualization capabilities for deeper data analysis. For example, you can filter for queries that had more than 100 clicks and see a chart for how much each of those queries contributed to your overall clicks from top queries.

To enable these Search Engine Optimization reports, you should sign up for the pilot and you must be both a Webmaster Tools verified site owner and a Google Analytics administrator. Each additional user who would like to view them also needs to individually sign up for the pilot.

Posted by Christina Chen & Torrey Hoffman, Webmaster Tools Team

Tuesday, June 7, 2011

[Gd] +1'ing our API docs


The official Google Code blog: +1'ing our API docs

By Ashleigh Rentz, API Docs Program Manager

"Hey Scott, how do I format this API call so the data comes back as a string instead of an object?"

Sometimes it’s hard to find the right doc at the right time. Lots of web pages mention the terms you’re looking for, but which ones actually have them in the right context? We ask our friends and coworkers these questions because we bet they’ve seen the problem before. We trust their technical judgment and we know they can skip straight to the right answer.

That’s why we’ve just added the +1 button to the top of most API docs:

Whenever you find the key information you need, we hope you’ll +1 that page and let the world know! It’s a simple way to help point the people you code with in the right direction and make RTFM’ing a bit easier for everyone.

Ashleigh Rentz is a Program Manager supporting the team of technical writers who tirelessly document Google’s developer APIs. She can often be seen skating down the halls between meetings.

Posted by Scott Knaster, Editor

[Gd] New Editing Features in Eclipse plug-in for Android


Android Developers Blog: New Editing Features in Eclipse plug-in for Android

At the Google I/O conference a month ago, we demonstrated the next version of the Android Development Tools (ADT) plugin. Today we’re happy to announce that version 11 is done and available for download!

ADT 11 focuses on editor improvements. First, it offers several new visual refactoring operations, such as “Extract Include” and “Extract Style,” which help automatically extract duplicated layout fragments and style attributes into reusable layouts, styles, and themes.

Second, the visual layout editor now supports fragments, palette configurations, and improved support for custom views.

Last, XML editing has been improved with new quick fixes, code completion in more file types and many “go to declaration” enhancements.

ADT 11 packs a long list of new features and enhancements. Please visit our ADT page for more details. For an in-depth demo, check out the video of our Android Development Tools session at Google I/O, below.

Please note that the visual layout editor depends on a layout rendering library that ships with each version of the platform component in the SDK. We are currently working on a number of improvements to this library as well, which we plan to release soon for all platform versions. When we release the updates, some new features in ADT 11 will be “unlocked” - such as support for ListView previewing - so keep an eye on this blog for further announcements.


[Gd] Introducing ViewPropertyAnimator


Android Developers Blog: Introducing ViewPropertyAnimator

[This post is by Chet Haase, an Android engineer who specializes in graphics and animation, and who occasionally posts videos and articles on these topics on his CodeDependent blog at — Tim Bray]

In an earlier article, Animation in Honeycomb, I talked about the new property animation system available as of Android 3.0. This new animation system makes it easy to animate any kind of property on any object, including the new properties added to the View class in 3.0. In the 3.1 release, which was made available recently, we added a small utility class that makes animating these properties even easier.

First, if you’re not familiar with the new View properties such as alpha and translationX, it might help for you to review the section in that earlier article that discusses these properties entitled, rather cleverly, “View properties”. Go ahead and read that now; I’ll wait.

Okay, ready?

Refresher: Using ObjectAnimator

Using the ObjectAnimator class in 3.0, you could animate one of the View properties with a small bit of code. You create the Animator, set any optional properties such as the duration or repetition attributes, and start it. For example, to fade an object called myView out, you would animate the alpha property like this:

    ObjectAnimator.ofFloat(myView, "alpha", 0f).start();

This is obviously not terribly difficult, either to do or to understand. You’re creating and starting an animator with information about the object being animated, the name of the property to be animated, and the value to which it’s animating. Easy stuff.

But it seemed that this could be improved upon. In particular, since the View properties will be very commonly animated, we could make some assumptions and introduce some API that makes animating these properties as simple and readable as possible. At the same time, we wanted to improve some of the performance characteristics of animations on these properties. This last point deserves some explanation, which is what the next paragraph is all about.

There are three aspects of performance that are worth improving about the 3.0 animation model on View properties. One of the elements concerns the mechanism by which we animate properties in a language that has no inherent concept of “properties”. The other performance issues relate to animating multiple properties. When fading out a View, you may only be animating the alpha property. But when a view is being moved on the screen, both the x and y (or translationX and translationY) properties may be animated in parallel. And there may be other situations in which several properties on a view are animated in parallel. There is a certain amount of overhead per property animation that could be combined if we knew that there were several properties being animated.

The Android runtime has no concept of “properties”, so ObjectAnimator uses a technique of turning a String denoting the name of a property into a call to a setter function on the target object. For example, the String “alpha” gets turned into a reference to the setAlpha() method on View. This function is called through either reflection or JNI, mechanisms which work reliably but have some overhead. But for objects and properties that we know, like these properties on View, we should be able to do something better. Given a little API and knowledge about each of the properties being animated, we can simply set the values directly on the object, without the overhead associated with reflection or JNI.

Another piece of overhead is the Animator itself. Although all animations share a single timing mechanism, and thus don’t multiply the overhead of processing timing events, they are separate objects that perform the same tasks for each of their properties. These tasks could be combined if we know ahead of time that we’re running a single animation on several properties. One way to do this in the existing system is to use PropertyValuesHolder. This class allows you to have a single Animator object that animates several properties together and saves on much of the per-Animator overhead. But this approach can lead to more code, complicating what is essentially a simple operation. The new approach allows us to combine several properties under one animation in a much simpler way to write and read.

Finally, each of these properties on View performs several operations to ensure proper invalidation of the object and its parent. For example, translating a View in x invalidates the position that it used to occupy and the position that it now occupies, to ensure that its parent redraws the areas appropriately. Similarly, translating in y invalidates the before and after positions of the view. If these properties are both being animated in parallel, there is duplication of effort since these invalidations could be combined if we had knowledge of the multiple properties being animated. ViewPropertyAnimator takes care of this.

Introducing: ViewPropertyAnimator

ViewPropertyAnimator provides a simple way to animate several properties in parallel, using a single Animator internally. And as it calculates animated values for the properties, it sets them directly on the target View and invalidates that object appropriately, in a much more efficient way than a normal ObjectAnimator could.

Enough chatter: let’s see some code. For the fading-out view example we saw before, you would do the following with ViewPropertyAnimator:
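The code sample appears to have been dropped from this copy of the post; based on the surrounding description, the fade-out would be a one-liner along these lines:

```java
myView.animate().alpha(0);
```

Note that there is no call to start(); the reason for that is covered below.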


Nice. It’s short and it’s very readable. And it’s also easy to combine with other property animations. For example, we could move our view in x and y to (500, 500) as follows:
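The snippet is missing here; moving the view to (500, 500) would presumably read:

```java
myView.animate().x(500f).y(500f);
```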


There are a couple of things worth noting about these commands:

  • animate(): The magic of the system begins with a call to the new method animate() on the View object. This returns an instance of ViewPropertyAnimator, on which other methods are called which set the animation properties.

  • Auto-start: Note that we didn’t actually start() the animations. In this new API, starting the animations is implicit. As soon as you’re done declaring them, they will all begin. Together. One subtle detail here is that they will actually wait until the next update from the UI toolkit event queue to start; this is the mechanism by which ViewPropertyAnimator collects all declared animations together. As long as you keep declaring animations, it will keep adding them to the list of animations to start on the next frame. As soon as you finish and then relinquish control of the UI thread, the event queue mechanism kicks in and the animations begin.

  • Fluent: ViewPropertyAnimator has a Fluent interface, which allows you to chain method calls together in a very natural way and issue a multi-property animation command as a single line of code. So all of the calls such as x() and y() return the ViewPropertyAnimator instance, on which you can chain other method calls.
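The fluent pattern described above can be sketched in plain Java (FluentAnimator is a made-up class for illustration, not the Android API):

```java
// Minimal sketch of a fluent interface: each method sets a value and
// returns `this`, so calls chain naturally, as ViewPropertyAnimator's
// x() and y() do.
class FluentAnimator {
    private float x, y;

    FluentAnimator x(float value) { this.x = value; return this; }
    FluentAnimator y(float value) { this.y = value; return this; }

    @Override
    public String toString() { return "(" + x + ", " + y + ")"; }
}
```

A call like new FluentAnimator().x(50f).y(100f) then reads as a single declarative command, which is the readability win this bullet describes.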

You can see from this example that the code is much simpler and more readable. But where do the performance improvements of ViewPropertyAnimator come in?

Performance Anxiety

One of the performance wins of this new approach exists even in this simple example of animating the alpha property. ViewPropertyAnimator uses no reflection or JNI techniques; for example, the alpha() method in the example operates directly on the underlying "alpha" field of a View, once per animation frame.

The other performance wins of ViewPropertyAnimator come in the ability to combine multiple animations. Let’s take a look at another example for this.

When you move a view on the screen, you might animate both the x and y position of the object. For example, this animation moves myView to x/y values of 50 and 100:

    ObjectAnimator animX = ObjectAnimator.ofFloat(myView, "x", 50f);
    ObjectAnimator animY = ObjectAnimator.ofFloat(myView, "y", 100f);
    AnimatorSet animSetXY = new AnimatorSet();
    animSetXY.playTogether(animX, animY);
    animSetXY.start();

This code creates two separate animations and plays them together in an AnimatorSet. This means that there is the processing overhead of setting up the AnimatorSet and running two Animators in parallel to animate these x/y properties. There is an alternative approach using PropertyValuesHolder that you can use to combine multiple properties inside of one single Animator:

    PropertyValuesHolder pvhX = PropertyValuesHolder.ofFloat("x", 50f);
    PropertyValuesHolder pvhY = PropertyValuesHolder.ofFloat("y", 100f);
    ObjectAnimator.ofPropertyValuesHolder(myView, pvhX, pvhY).start();

This approach avoids the multiple-Animator overhead, and is the right way to do this prior to ViewPropertyAnimator. And the code isn’t too bad. But using ViewPropertyAnimator, it all gets easier:
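The missing one-liner, matching the x/y values from the ObjectAnimator example above, would be something like:

```java
myView.animate().x(50f).y(100f);
```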


The code, once again, is simpler and more readable. And it has the same single-Animator advantage of the PropertyValuesHolder approach above, since ViewPropertyAnimator runs one single Animator internally to animate all of the properties specified.

But there’s one other benefit of the ViewPropertyAnimator example above that’s not apparent from the code: it saves effort internally as it sets each of these properties. Normally, when the setX() and setY() functions are called on View, there is a certain amount of calculation and invalidation that occurs to ensure that the view hierarchy will redraw the correct region affected by the view that moved. ViewPropertyAnimator performs this calculation once per animation frame, instead of once per property. It sets the underlying x/y properties of View directly and performs the invalidation calculations once for x/y (and any other properties being animated) together, avoiding the per-property overhead necessitated by the ObjectAnimator property approach.

An Example

I finished this article, looked at it ... and was bored. Because, frankly, talking about visual effects really begs having some things to look at. The tricky thing is that screenshots don’t really work when you’re talking about animation. (“In this image, you see that the button is moving. Well, not actually moving, but it was when I captured the screenshot. Really.”) So I captured a video of a small demo application that I wrote, and will walk through the code for the demo here.

Here’s the video. Be sure to turn on your speakers before you start it. The audio is really the best part.

In the video, the buttons on the upper left (“Fade In”, “Fade Out”, etc.) are clicked one after the other, and you can see the effect that those button clicks have on the button at the bottom (“Animating Button”). All of those animations happen thanks to the ViewPropertyAnimator API (of course). I’ll walk through the code for each of the individual animations below.

When the activity first starts, the animations are set up to use a longer duration than the default. This is because I wanted the animations to last long enough in the video for you to see. Changing the default duration for the animatingButton object is a one-line operation to retrieve the ViewPropertyAnimator for the button and set its duration:
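That one-liner is absent from this copy; using the two-second duration mentioned later in the post, it would look like:

```java
animatingButton.animate().setDuration(2000);
```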


The rest of the code is just a series of OnClickListener objects set up on each of the buttons to trigger its specific animation. I’ll put the complete listener in for the first animation below, but for the rest of them I’ll just put the inner code instead of the listener boilerplate.

The first animation in the video happens when the Fade Out button is clicked, which causes Animating Button to (you guessed it) fade out. Here’s the listener for the fadeOut button which performs this action:

    fadeOut.setOnClickListener(new View.OnClickListener() {
        public void onClick(View v) {
            // Fade out by animating alpha to 0
            animatingButton.animate().alpha(0);
        }
    });
You can see, in this code, that we simply tell the object to animate to an alpha of 0. It starts from whatever the current alpha value is.

The next button performs a Fade In action, returning the button to an alpha value of 1 (fully opaque):

    animatingButton.animate().alpha(1);
The Move Over and Move Back buttons perform animations on two properties in parallel: x and y. This is done by chaining calls to those property methods in the animator call. For the Move Over button, we have the following:

    int xValue = container.getWidth() - animatingButton.getWidth();
    int yValue = container.getHeight() - animatingButton.getHeight();
    animatingButton.animate().x(xValue).y(yValue);

And for the Move Back case (where we just want to return the button to its original place at (0, 0) in its container), we have this code:

    animatingButton.animate().x(0).y(0);
One nuance to notice from the video is that, after the Move Over and Move Back animations were run, I then ran them again, clicking the Move Back animation while the Move Over animation was still executing. The second animation on the same properties (x and y) caused the first animation to cancel and the second animation to start from that point. This is an intentional part of the functionality of ViewPropertyAnimator. It takes your command to animate a property and, if necessary, cancels any ongoing animation on that property before starting the new animation.
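That cancel-and-restart behavior can be sketched in plain Java (PropertyAnimatorSketch and its methods are hypothetical names, purely to illustrate the rule, not Android internals):

```java
// A hypothetical sketch (plain Java, names are illustrative) of the
// cancel-and-restart rule: starting a new animation on a property first
// cancels any animation already running on that same property.
import java.util.Map;
import java.util.HashMap;

class PropertyAnimatorSketch {
    // At most one in-flight animation per property name.
    private final Map<String, String> running = new HashMap<>();
    private int cancelled; // how many animations were cut short

    void animateProperty(String property, String label) {
        if (running.containsKey(property)) {
            cancelled++; // the old animation on this property is cancelled...
        }
        running.put(property, label); // ...and the new one starts from here
    }

    int cancelledCount() { return cancelled; }
    String current(String property) { return running.get(property); }
}

class CancelDemo {
    public static void main(String[] args) {
        PropertyAnimatorSketch animator = new PropertyAnimatorSketch();
        animator.animateProperty("x", "moveOver");
        animator.animateProperty("y", "moveOver");
        // Clicking Move Back mid-flight: the x and y animations restart
        // from wherever the button currently is.
        animator.animateProperty("x", "moveBack");
        animator.animateProperty("y", "moveBack");
        System.out.println(animator.cancelledCount()); // prints 2
    }
}
```

The key point is that cancellation is keyed per property: a new animation on x cancels only a running x animation, which is what lets unrelated properties keep animating in parallel.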

Finally, we have the 3D rotation effect, where the button spins twice around the Y (vertical) axis. This is obviously a more complicated action and takes a great deal more code than the other animations (or not):

    animatingButton.animate().rotationYBy(720);
One important thing to notice in the rotation animations in the video is that they happen in parallel with part of the Move animations. That is, I clicked on the Move Over button, then the Rotate button. This caused the movement to start, and then the rotation to start while the button was still moving. Since each animation lasted for two seconds, the rotation animation finished after the movement animation was completed. Same thing on the return trip - the button was still spinning after it settled into place at (0, 0). This shows how independent animations (animations that are not grouped together on the animator at the same time) each create a completely separate ObjectAnimator internally, allowing the animations to happen independently and in parallel.

Play with the demo some more, check out the code, and groove to the awesome soundtrack for 16.75. And if you want the code for this incredibly complex application (which really is nothing more than five OnClick listeners wrapping the animator code above), you can download it from here.

And so...

For the complete story on ViewPropertyAnimator, you might want to see the SDK documentation. First, there’s the animate() method in View. Second, there’s the ViewPropertyAnimator class itself. I’ve covered the basic functionality of that class in this article, but there are a few more methods in there, mostly around the various properties of View that it animates. Third, there’s ... no, that’s it. Just the method in View and the ViewPropertyAnimator class itself.

ViewPropertyAnimator is not meant to be a replacement for the property animation APIs added in 3.0. Heck, we just added them! In fact, the animation capabilities added in 3.0 provide important plumbing for ViewPropertyAnimator as well as other animation capabilities in the system overall. And the capabilities of ObjectAnimator provide a very flexible and easy to use facility for animating, well, just about anything! But if you want to easily animate one of the standard properties on View and the more limited capabilities of the ViewPropertyAnimator API suit your needs, then it is worth considering.

Note: I don’t want to get you too worried about the overhead of ObjectAnimator; the overhead of reflection, JNI, or any of the rest of the animator process is quite small compared to what else is going on in your program. It’s just that the efficiencies of ViewPropertyAnimator offer some advantages when you are doing lots of View property animation in particular. But to me, the best part about the new API is the code that you write. It’s the best kind of API: concise and readable. Hopefully you agree and will start using ViewPropertyAnimator for your view property animation needs.


Monday, June 6, 2011

[Gd] Dev Channel Update


Google Chrome Releases: Dev Channel Update

The Chrome Dev channel has been updated to 13.0.782.10 for Windows, Mac and Linux.  This release contains an updated version of Adobe Flash.  Interested in switching to the Dev channel?  Find out how.  If you find a new issue, please let us know by filing a bug.

Anthony LaForge
Google Chrome

[Gd] Beta Channel Update


Google Chrome Releases: Beta Channel Update

The Chrome Beta channel has been updated to 12.0.742.82 for all platforms.  This release contains an updated version of Adobe Flash.  Interested in switching to the Beta channel?  Find out how.  If you find a new issue, please let us know by filing a bug.

Jason Kersey
Google Chrome

[Gd] Stable Channel Update


Google Chrome Releases: Stable Channel Update

The Chrome Stable channel has been updated to 11.0.696.77 for all platforms.  This release contains an updated version of Adobe Flash.  Interested in switching to the Stable channel?  Find out how.  If you find a new issue, please let us know by filing a bug.

Karen Grunberg
Google Chrome

[Gd] Dev Channel Update


Google Chrome Releases: Dev Channel Update

The Dev channel has been updated to 13.0.782.4 on Mac to fix a start-up crash regression.

Anthony Laforge
Google Chrome

[Gd] Chrome Beta Release


Google Chrome Releases: Chrome Beta Release

The Chrome Beta channel has been updated to 12.0.742.77 for all platforms.  This release contains a small number of UI updates and performance fixes.  The full list of changes is available in the SVN revision log.  Interested in switching to the Beta channel?  Find out how.  If you find a new issue, please let us know by filing a bug.

Jason Kersey
Google Chrome

[Gd] Chrome OS Beta Channel Update


Google Chrome Releases: Chrome OS Beta Channel Update

The Chrome OS Beta channel has been updated to R12 release 0.12.433.90, including Chrome 12.0.742.75 Beta. This release includes bug fixes:
  • File Manager fixes
  • Change network connecting animation
  • Crash and stability fixes
  • Update Flash plugin to
No new known issues.

You can find the full list of fixes that are in Chrome OS R12 in the chromium-os bug tracker. If you find new issues, please let us know by visiting our help site or filing a bug. You can submit feedback using ‘Report an issue’ under the wrench menu.

Josafat Garcia
Google Chrome