Saturday, February 26, 2011

[Gd] Announcing the next AdWords API Workshops in March


AdWords API Blog: Announcing the next AdWords API Workshops in March


You may remember the series of AdWords API Workshops that the Google Developer Relations team has hosted in the past. These workshops focused on new and upcoming features in the AdWords API and provided detailed code walkthroughs on how to use these new features (see videos).

We invite you to join us for our next AdWords API Workshops which we plan to host in the following cities:
  • London, March 7th
  • Amsterdam, March 9th
  • San Francisco, March 9th
  • Hamburg, March 11th
  • New York City, March 15th

In addition to having our team on-hand to answer all your API-related questions, we will also be presenting technical deep-dives on the following topics:
  • The new Report Service - including cross-client reporting
  • Generic Selectors - a new and more efficient selection mechanism
  • Remarketing - now fully accessible through the API
  • AdWords Campaign Experiments - now with full access

All events will have the same agenda, and will run from approximately 10:00AM - 3:00PM (local time). These workshops are geared towards software engineers and developers who are already somewhat familiar with the AdWords API.

For more information and to register, visit: http://sites.google.com/site/awapiworkshops/home.

-- Sumit Chandel, AdWords API Team
URL: http://adwordsapi.blogspot.com/2011/02/announcing-next-adwords-api-workshops.html

[Gd] The Data Viz Challenge: can you make tax data exciting?


The official Google Code blog: The Data Viz Challenge: can you make tax data exciting?

(Cross-posted from The Official Google Blog)

This time of year, everyone in the United States is starting to fill out—with varying levels of enthusiasm—our federal income tax forms. Yet, after we write our checks to the IRS, most of us don’t really know exactly where our money is going.

Fortunately, there’s a new online tool to help us find out. Last year, Andrew Johnson and Louis Garcia, two developers from Minneapolis, Minn., created a website called whatwepayfor.com that uses public data to estimate how our tax money is spent. You enter your income and filing status on the site, and it creates a formatted table of numbers showing your contributions to the federal budget—down to the penny:

We’re impressed by what the website uncovers. In 2010, for example, a married couple making $40,000 a year contributed approximately $14.07 to space operations, $6.83 to aviation security and $0.91 to the Peace Corps…and those are just a few of the hundreds of expenditures revealed on the site. As we spent time exploring all of these details, it got us thinking: how could we make the information even more accessible? So we created a simple interactive data visualization:

Click the image above to try the interactive version—it lets you drag the bubbles around, change the income level and so on. You can now look at the data in a new way, and it’s a little more fun to explore. Of course, there are lots of ways to visualize the data, and we’re very sure there are many talented designers and developers around the country who can do it even better than we have.

To make that happen, we’ve teamed up with Eyebeam, a not-for-profit art and technology center, to host what we’re calling the Data Viz Challenge. Andrew and Louis have built an API to let anyone access the data, so now you can choose how to display it. Could you create a better animated chart? Something in 3D? An interactive website? A physical display somewhere in the real world? We want you to show everyone how data visualization can be a powerful tool for turning information into understanding.

You can enter the challenge at datavizchallenge.org, where you’ll also find more information about the challenge and the data. The challenge starts today and ends March 27, 2011, and is open to the U.S. only. The top visualization, as chosen by a jury, will receive a $5,000 award and a shout-out on the site and this blog. We’ll announce the shortlist the week of April 11, and the winners on April 18, a.k.a. Tax Day.

If you’re a data viz enthusiast, we hope you’ll take a look at the data and build your own creative visualization. But even if you’re not, hopefully the results will help you appreciate what data visualization can do, and its usefulness in turning raw information—like federal income tax numbers—into something you can explore and understand.


By Valdean Klump, Google Creative Lab
URL: http://googlecode.blogspot.com/2011/02/data-viz-challenge-can-you-make-tax.html

[Gd] Chrome Developer Tools: Back to Basics


Chromium Blog: Chrome Developer Tools: Back to Basics

It’s been an exciting past few months in the Google Chrome Developer Tools world as we keep adding new features, while polishing up existing ones to respond to your feedback.

One of the areas where we have focused a lot of our energy is network instrumentation. Recently we’ve made many changes that will hopefully improve your experience when using Chrome Developer Tools. These improvements include:

  • Network aspects of your web page are now inspected in the Network panel. This gives you access to even more information at a single glance. You can sort and clear data, preserve log information upon navigation and even export network data in HAR format (see the post-processing sketch after this list).
  • All the timing information about your resource loads now comes from the network stack, not WebKit, so timing information accurately represents raw network timing. You can see detailed timing for the different phases of a load by hovering over its log entry.

  • We now push raw HTTP headers and status messages into Chrome Developer Tools. As a result, you now see precisely what the browser received from the server and not just how the rendering engine interpreted that information.
  • Similarly to the old Resources panel, you can see syntax-highlighted resource contents.
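
Since HAR is plain JSON, the exported data is easy to post-process. Here's a small sketch (the file name is a placeholder) that lists the slowest requests from an exported log:

    import json

    def slowest_requests(har_path, top_n=5):
        # A HAR file is {"log": {"entries": [...]}}; each entry records
        # the total time in milliseconds plus the request and response.
        with open(har_path) as f:
            entries = json.load(f)['log']['entries']
        slowest = sorted(entries, key=lambda e: e['time'], reverse=True)
        for entry in slowest[:top_n]:
            print('%7.0f ms  %s' % (entry['time'], entry['request']['url']))

    slowest_requests('www.example.com.har')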

We’ve also made CSS editing a whole lot easier. In particular, you’ll now find separate fields for property names and values instead of a single field for both. As you type, you will see suggestions of available keywords for property values.


But that’s only the tip of the iceberg. Similar to the changes in the network panel, the CSS sidebar now shows the raw information that the browser gets from the server - not the rendering engine’s interpretation of the information. As a result, you can use Chrome Developer Tools to see CSS properties that are not recognized by WebKit (e.g., engine-specific or simply erroneous properties). This finally puts an end to the nightmare of disappearing invalid properties.


For a more complete reference on working with the Chrome Developer Tools, check out our new home page. The CSS improvements that we implemented upstream in WebKit are further described in our WebKit blog post. And for even more tips on how to use Chrome Developer Tools, watch the new video below.



Posted by Pavel Feldman and Alexander Pavlov, Software Engineers
URL: http://blog.chromium.org/2011/02/chrome-developer-tools-back-to-basics.html

Friday, February 25, 2011

[Gd] Beta Channel Update


Google Chrome Releases: Beta Channel Update

The Beta channel has been updated to 10.0.648.119 for Windows.  This release contains stability improvements over the previous release.  Full details about the Chrome changes are available in the SVN revision log. If you find new issues, please let us know by filing a bug. Want to change to another Chrome release channel? Find out how.

Jason Kersey
Google Chrome
URL: http://googlechromereleases.blogspot.com/2011/02/beta-channel-update_24.html

[Gd] [Libraries][Update] jQuery 1.5.1


Google AJAX API Alerts: [Libraries][Update] jQuery 1.5.1

jQuery has been updated to 1.5.1
URL: http://ajax-api-alerts.blogspot.com/2011/02/librariesupdate-jquery-151.html

[Gd] Pushing Updates with the Channel API


Google App Engine Blog: Pushing Updates with the Channel API

If you've been watching Best Buy closely, you already know that Best Buy is constantly trying to come up with new and creative ways to use App Engine to engage with their customers. In this guest blog post, Luke Francl, BBYOpen Developer, was kind enough to share with us Best Buy's latest App Engine project.

As part of Best Buy's Connected Store initiative, we have placed QR codes on our product information Fact Tags, in addition to the standard pricing and product descriptions already printed there. When a customer uses the Best Buy app, or any other QR code scanner, they are shown the product details for the product they have scanned, powered by the BBYOpen API or the m.bestbuy.com platform.



To track what stores and products are most popular, QR codes are also encoded with the store number. My project at Best Buy has been to analyze these scans and make new landing pages for QR codes easier to create.



Since we have the geo-location of the stores and product details from our API, it is a natural fit to display these scans on a map. We implemented an initial version of this idea, which used polling to visualize recent scans. To take this a step further, we thought it would be exciting to use the recently launched App Engine Channel API to update our map in real-time.

Our biggest challenge was pushing the updates to multiple browsers, since we'd most certainly have more than one user at a time looking at our map. The Channel API does not currently support broadcasting a single update to many connected clients. In order to broadcast updates to multiple users, our solution was to keep a list of client IDs and send an update message to each of them.



To implement this, we decided to store the list of active channels in memcache. This solution is not ideal as there are race conditions when we modify the list of client IDs. However, it works well for our demo.
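
As an aside (not from the post): newer App Engine SDKs expose memcache compare-and-set operations, which could close that race. A sketch, assuming the same JSON-in-memcache layout as the demo:

import simplejson
from google.appengine.api import memcache

def add_channel(channel_id, timestamp):
    client = memcache.Client()  # gets()/cas() live on the Client class
    while True:
        serialized = client.gets('channels')
        if serialized is None:
            # add() is atomic; it fails if another request created the key.
            if memcache.add('channels',
                            simplejson.dumps({channel_id: timestamp})):
                return
            continue
        channels = simplejson.loads(serialized)
        channels[channel_id] = timestamp
        if client.cas('channels', simplejson.dumps(channels)):
            return  # no concurrent modification since gets()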



Here’s how we got it working. The code has been slightly simplified for clarity, including removing the rate limiting that we do. To play with a working demo, check out the channel-map-demo project from GitHub.



As customers in our stores scan QR codes, those scans are recorded by enqueuing a deferred task. We defer all writes so we can return a response to the client as quickly as possible.
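
The enqueuing side lives in the full source; a minimal sketch of what it might look like (ScanHandler, record_scan and the URL path are hypothetical names for illustration):

from google.appengine.ext import deferred, webapp

def record_scan(store_id, sku):
    # Write the Scan entity, then call push_to_channels(scan).
    pass

class ScanHandler(webapp.RequestHandler):
    def get(self):
        # Hand the datastore write and channel pushes to the task queue
        # so this request can return immediately.
        deferred.defer(record_scan,
                       self.request.get('store'),
                       self.request.get('sku'))
        # Send the scanner straight on to the product page.
        self.redirect('/products/%s' % self.request.get('sku'))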



In the deferred, we call a function to push the message to all the active channels (see full source).

def push_to_channels(scan):
    content = '<div class="infowindowcontent">(...) </div>' % {
        'product_name': scan.product.name,
        'timestamp': scan.timestamp.strftime('%I:%M %p'),
        'store_name': scan.store.name,
        'state': scan.store.state,
        'image': scan.product.image}

    message = {'lat': scan.store.lat,
               'lon': scan.store.lon,
               'content': content}

    channels = simplejson.loads(memcache.get('channels') or '{}')

    # The message is identical for every client, so encode it once.
    encoded_message = simplejson.dumps(message)
    for channel_id in channels.iterkeys():
        channel.send_message(channel_id, encoded_message)


The message is a JSON data structure containing the latitude and longitude of the store where the scan occurred, plus a snippet of HTML to display in an InfoWindow on the map. The product information (such as name and thumbnail image) comes from our BBYOpen Products API.



Then, when a user opens up the site and requests the map page, we create a channel, add it to the serialized channels Python dictionary, stored in memcache, and pass the token back to the client (see full source).

channel_id = uuid.uuid4().hex
token = channel.create_channel(channel_id)

channels = simplejson.loads(memcache.get('channels') or '{}')
channels[channel_id] = str(datetime.now())
memcache.set('channels', simplejson.dumps(channels))


On the map page, JavaScript creates a Google Maps map and uses the token to open a channel. When the onMessage callback is called by the Channel API, a new InfoWindow is displayed on the map using the HTML content and latitude and longitude in the message (see full source).

function onMessage(message) {
  var scan = JSON.parse(message.data);

  var infoWindow = new google.maps.InfoWindow(
      {content: scan.content,
       disableAutoPan: true,
       position: new google.maps.LatLng(scan.lat, scan.lon)});

  infoWindow.open(map);
  setTimeout(function() { infoWindow.close(); }, 10000);
}
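
For completeness, the client-side wiring that delivers messages to this callback is the standard Channel API setup, roughly like this (how you template the token into the page depends on your framework):

<script src="/_ah/channel/jsapi"></script>
<script>
  var channel = new goog.appengine.Channel('{{ token }}');
  var socket = channel.open();
  socket.onmessage = onMessage;  // the handler above
</script>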


Finally, since channels can only be open for two hours, we have a cron job that runs once an hour to remove old channels. Before deleting the client ID, a message is sent on the channel which triggers code in the JavaScript onMessage function to reload the page, thus giving it a new channel and client ID (see full source).
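
The cron handler itself is in the full source; a condensed sketch, assuming timestamps were stored via str(datetime.now()) as above (the {'reload': True} message shape is illustrative):

from datetime import datetime, timedelta

import simplejson
from google.appengine.api import channel, memcache

def remove_old_channels():
    channels = simplejson.loads(memcache.get('channels') or '{}')
    cutoff = str(datetime.now() - timedelta(hours=1))
    for channel_id, created in channels.items():
        if created < cutoff:  # ISO-style strings compare chronologically
            # Tell the client to reload and pick up a fresh channel
            # before the two-hour limit closes this one.
            channel.send_message(channel_id,
                                 simplejson.dumps({'reload': True}))
            del channels[channel_id]
    memcache.set('channels', simplejson.dumps(channels))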



You can see the end result on our map, watch a video about the BBYScan project or check out the sample channel-map-demo project and create your own Channel API based application.



Posted by Fred Sauer, App Engine Team
URL: http://googleappengine.blogspot.com/2011/02/pushing-updates-with-channel-api.html

Thursday, February 24, 2011

[Gd] Introducing Recipe View, based on rich snippets markup


Official Google Webmaster Central Blog: Introducing Recipe View, based on rich snippets markup

Webmaster level: All

Today, we’re happy to introduce Recipe View, a new way of finding recipes when searching on Google. Recipe View enables you to filter your regular web search results to show only recipes and to restrict results based on ingredients, cook time, or calorie preferences:

Read more about Recipe View on the Official Google Blog and be sure to check out our video of Google Chef Scott Giambastiani demonstrating how he uses Recipe View to find great recipes for Googlers:



Recipe View is based on data from recipe rich snippets markup. As a webmaster, to make sure your recipe content can show in Recipe View (currently rolling out in the US and Japan) as well as in regular search results with rich snippets (available globally), be sure to add structured data markup to your recipe pages. Rich snippets are also available for reviews, people, products, and events, and we’ll continue to expand this list of categories over time. You can always see the full list of supported types by referring to our rich snippets documentation and by watching for further updates here on the Webmaster Central Blog.
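
As a reference point, recipe rich snippets use the data-vocabulary.org Recipe vocabulary described in that documentation; a minimal microdata sketch (all values are placeholders) looks like this:

    <div itemscope itemtype="http://data-vocabulary.org/Recipe">
      <h1 itemprop="name">Apple Pie</h1>
      <p>By <span itemprop="author">Carol Smith</span></p>
      <span itemprop="ingredient" itemscope
            itemtype="http://data-vocabulary.org/RecipeIngredient">
        <span itemprop="amount">6 cups</span>
        <span itemprop="name">apples</span>
      </span>
      Cook time: <time itemprop="cookTime" datetime="PT1H">1 hour</time>
    </div>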

This marks an exciting milestone for us -- it’s the first time we’ve introduced search filters based on rich snippets markup from webmasters. Over time, we’ll continue exploring new ways to enhance the search experience using this data.

Posted by Toshiaki Fujiki, Google Search team
URL: http://googlewebmastercentral.blogspot.com/2011/02/introducing-recipe-view-based-on-rich.html

[Gd] Making Websites Mobile Friendly

| More

Official Google Webmaster Central Blog: Making Websites Mobile Friendly

Webmaster level: Intermediate

We’ve noticed a rise in the number of questions from webmasters about how best to structure a website for mobile phones and how websites can best interact with Googlebot-Mobile. In this post we’ll explain the current situation and give you specific recommendations you can implement now.

Some Background

Let’s start with a simple question: what do we mean by “mobile phone” when talking about mobile-friendly websites?

A good way to answer this question is to think about the capabilities of the mobile phone’s web browser, especially in relation to the capabilities of modern desktop browsers. To simplify matters, we can break mobile phones into a few classifications:

  1. Traditional mobile phones: Phones with browsers that cannot render normal desktop webpages. This includes browsers for cHTML (iMode), WML, WAP, and the like.
  2. Smartphones: Phones with browsers that render normal desktop pages, at least to some extent. This category includes a diversity of devices, such as Windows Phone 7, Blackberry devices, iPhones, and Android phones, and also tablets and eBook readers.

    We can further break down this category by support for HTML5:

    • Devices with browsers that do not support HTML5
    • Devices with browsers that support HTML5

Once upon a time, mobile phones connected to the Internet using browsers with limited rendering capabilities; but this is clearly a changing situation with the fast rise of smartphones which have browsers that rival the full desktop experience. As such, it’s important to note that the distinction we are making here is based on the current situation as we see it and might change in the future.

Googlebot and Mobile Content

Google has two crawlers relevant to this topic: Googlebot and Googlebot-Mobile. Googlebot crawls desktop-browser type of webpages and content embedded in them and Googlebot-Mobile crawls mobile content. The questions we’re seeing more of can be summed up as follows:

Given the diversity of capabilities of mobile web browsers, what kind of content should I serve to Googlebot-Mobile?

The answer lies in the User-agent that Googlebot-Mobile supplies when crawling. There are several User-agent strings in use by Googlebot-Mobile, all of which use this format:

[Phone name(s)] (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)

To decide which content to serve, assess which content your website has that best serves the phone(s) in the User-agent string. A full list of Googlebot-Mobile User-agents can be found here.
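
As a sketch (the token list is illustrative, not exhaustive): because Googlebot-Mobile's User-agent string leads with real phone names, matching on the same tokens you use for those phones handles the crawler and the phones alike:

    # Hypothetical detection helper; extend the tokens to cover the
    # traditional-phone browsers your mobile content is built for.
    TRADITIONAL_TOKENS = ('wap', 'wml', 'imode', 'docomo', 'googlebot-mobile')

    def serves_traditional_mobile(user_agent):
        ua = user_agent.lower()
        return any(token in ua for token in TRADITIONAL_TOKENS)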

Notice that we currently do not crawl with Googlebot-Mobile using a smartphone User-agent string. Thus at the current time, a correctly-configured content serving system will serve Googlebot-Mobile content only for the traditional phones described above, because that’s what the User-agent strings in use today dictate. This may change in the future, and if so, it may mean there would be a new Googlebot-Mobile User-agent string.

For now, we expect smartphones to handle desktop experience content so there is no real need for mobile-specific effort from webmasters. However, for many websites it may still make sense for the content to be formatted differently for smartphones, and the decision to do so should be based on how you can best serve your users.

URL Structure for Mobile Content

The next set of questions asks about the URLs mobile content should be served from. Let’s look in detail at some common use cases.

Websites with only Desktop Experience Content

Most websites currently have only one version of their content, namely in HTML that is designed for desktop web browsers. This means all browsers access the content from the same URL.

These websites may not be serving traditional mobile phone users. The quality experienced by their smartphone users depends on the mobile browser they are using and it could be as good as browsing from the desktop.

If you serve only desktop experience content for all User Agents, you should do so for Googlebot-Mobile too; that is, treat Googlebot-Mobile as you treat all other or unknown User Agents. In these cases, Google may modify your webpages for an improved mobile experience.

Websites with Dedicated Mobile Content

Many websites have content specifically optimized for mobile users. The content could be simply reformatted for the typically smaller mobile displays, or it could be in a different format (e.g., served using WAP, etc.).

A very common question we see is: Does it matter if the different types of content are served from the same URL or from different URLs? For example, some websites have www.example.com as the URL desktop browsers are meant to access and have m.example.com or wap.example.com for the different mobile devices. Other websites serve all types of content from just one URL structure like www.example.com.

For Googlebot and Googlebot-Mobile, it does not matter what the URL structure is as long as it returns exactly what a user sees too. For example, if you redirect mobile users from www.example.com to m.example.com, that will be recognized by Googlebot-Mobile and both websites will be crawled and added to the correct index. In this case, use a 301 redirect for both users and Googlebot-Mobile.
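
A sketch of that redirect as a WSGI application (is_mobile_ua and render_desktop are hypothetical helpers; the point is returning the 301 status to users and Googlebot-Mobile alike):

    def application(environ, start_response):
        path = environ.get('PATH_INFO', '/')
        if is_mobile_ua(environ.get('HTTP_USER_AGENT', '')):
            start_response('301 Moved Permanently',
                           [('Location', 'http://m.example.com' + path)])
            return ['']
        start_response('200 OK', [('Content-Type', 'text/html')])
        return [render_desktop(path)]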

If you serve all types of content from www.example.com, i.e. serving desktop-optimized content or mobile-optimized content from the same URL depending on the User-agent, this will also lead to correct crawling by Googlebot and Googlebot-Mobile. This is not considered cloaking by Google.

It is worth repeating that regardless of URL structure, you must correctly detect the User-agent as given by your users and Googlebot-Mobile, and serve both the same content. Don’t forget to keep the default content, the desktop-optimized content, for when an unknown User-agent requests it.

Mobile Sitemaps in Webmaster Tools

Finally, we receive many questions about what URLs to put in Mobile Sitemaps. As explained in our Mobile Sitemaps Help Center articles, you should include only mobile content URLs in Mobile Sitemaps, even if these URLs also return non-mobile content when accessed by a non-mobile User-agent.
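
For reference, per those Help Center articles a Mobile Sitemap is a standard Sitemap plus the mobile namespace and an empty <mobile:mobile/> tag on each URL:

    <?xml version="1.0" encoding="UTF-8" ?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
            xmlns:mobile="http://www.google.com/schemas/sitemap-mobile/1.0">
      <url>
        <loc>http://m.example.com/article.html</loc>
        <mobile:mobile/>
      </url>
    </urlset>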

More Questions?

A good place to start is our Mobile Sites Help Center articles and the relevant sections in our Search Engine Optimization Starter Guide. We also created a thread in our forums for you to ask questions about this post.

Posted by Pierre Far, Webmaster Trends Analyst

URL: http://googlewebmastercentral.blogspot.com/2011/02/making-websites-mobile-friendly.html

[Gd] Animation in Honeycomb


Android Developers Blog: Animation in Honeycomb


[This post is by Chet Haase, an Android engineer who specializes in graphics and animation, and who occasionally posts videos and articles on these topics on his CodeDependent blog at graphics-geek.blogspot.com. — Tim Bray]

One of the new features ushered in with the Honeycomb release is a new animation system, a set of APIs in a whole new package (android.animation) that makes animating objects and properties much easier than it was before.

"But wait!" you blurt out, nearly projecting a mouthful of coffee onto your keyboard while reading this article, "Isn't there already an animation system in Android?"

Animation Prior to Honeycomb

Indeed, Android already has animation capabilities: there are several classes and lots of great functionality in the android.view.animation package. For example, you can move, scale, rotate, and fade Views and combine multiple animations together in an AnimationSet object to coordinate them. You can specify animations in a LayoutAnimationController to get automatically staggered animation start times as a container lays out its child views. And you can use one of the many Interpolator implementations like AccelerateInterpolator and Bounce to get natural, nonlinear timing behavior.

But there are a couple of major pieces of functionality lacking in the previous system.

For one thing, you can animate Views... and that's it. To a great extent, that's okay. The GUI objects in Android are, after all, Views. So as long as you want to move a Button, or a TextView, or a LinearLayout, or any other GUI object, the animations have you covered. But what if you have some custom drawing in your view that you'd like to animate, like the position of a Drawable, or the translucency of its background color? Then you're on your own, because the previous animation system only understands how to manipulate View objects.

The previous animations also have a limited scope: you can move, rotate, scale, and fade a View... and that's it. What about animating the background color of a View? Again, you're on your own, because the previous animations had a hard-coded set of things they were able to do, and you could not make them do anything else.

Finally, the previous animations changed the visual appearance of the target objects... but they didn't actually change the objects themselves. You may have run into this problem. Let's say you want to move a Button from one side of the screen to the other. You can use a TranslateAnimation to do so, and the button will happily glide along to the other side of the screen. And when the animation is done, it will gladly snap back into its original location. So you find the setFillAfter(true) method on Animation and try it again. This time the button stays in place at the location to which it was animated. And you can verify that by clicking on it - Hey! How come the button isn't clicking? The problem is that the animation changes where the button is drawn, but not where the button physically exists within the container. If you want to click on the button, you'll have to click the location that it used to live in. Or, as a more effective solution (and one just a tad more useful to your users), you'll have to write your code to actually change the location of the button in the layout when the animation finishes.

It is for these reasons, among others, that we decided to offer a new animation system in Honeycomb, one built on the idea of "property animation."

Property Animation in Honeycomb

The new animation system in Honeycomb is not specific to Views, is not limited to specific properties on objects, and is not just a visual animation system. Instead, it is a system that is all about animating values over time, and assigning those values to target objects and properties - any target objects and properties. So you can move a View or fade it in. And you can move a Drawable inside a View. And you can animate the background color of a Drawable. In fact, you can animate the values of any data structure; you just tell the animation system how long to run for, how to evaluate between values of a custom type, and what values to animate between, and the system handles the details of calculating the animated values and setting them on the target object.

Since the system is actually changing properties on target objects, the objects themselves are changed, not simply their appearance. So that button you move is actually moved, not just drawn in a different place. You can even click it in its animated location. Go ahead and click it; I dare you.

I'll walk briefly through some of the main classes at work in the new system, showing some sample code when appropriate. But for a more detailed view of how things work, check out the API Demos in the SDK for the new animations. There are many small applications written for the new Animations category (at the top of the list of demos in the application, right before the word App. I like working on animation because it usually comes first in the alphabet).

In fact, here's a quick video showing some of the animation code at work. The video starts off on the home screen of the device, where you can see some of the animation system at work in the transitions between screens. Then the video shows a sampling of some of the API Demos applications, to show the various kinds of things that the new animation system can do. This video was taken straight from the screen of a Honeycomb device, so this is what you should see on your system, once you install API Demos from the SDK.

Animator

Animator is the superclass of the new animation classes, and has some of the common attributes and functionality of the subclasses. The subclasses are ValueAnimator, which is the core timing engine of the system and which we'll see in the next section, and AnimatorSet, which is used to choreograph multiple animators together into a single animation. You do not use Animator directly, but some of the methods and properties of the subclasses are exposed at this superclass level, like the duration, startDelay and listener functionality.

The listeners tend to be important, because sometimes you want to key some action off the end of an animation, such as removing a view once the animation fading it out is done. To listen for animator lifecycle events, implement the AnimatorListener interface and add your listener to the Animator in question. For example, to perform an action when the animator ends, you could do this:

    anim.addListener(new Animator.AnimatorListener() {
        public void onAnimationStart(Animator animation) {}
        public void onAnimationEnd(Animator animation) {
            // do something when the animation is done
        }
        public void onAnimationCancel(Animator animation) {}
        public void onAnimationRepeat(Animator animation) {}
    });

As a convenience, there is an adapter class, AnimatorListenerAdapter, that stubs out these methods so that you only need to override the one(s) that you care about:


    anim.addListener(new AnimatorListenerAdapter() {
        public void onAnimationEnd(Animator animation) {
            // do something when the animation is done
        }
    });

ValueAnimator

ValueAnimator is the main workhorse of the entire system. It runs the internal timing loop that causes all of a process’s animations to calculate and set values and has all of the core functionality that allows it to do this, including the timing details of each animation, information about whether an animation repeats, listeners that receive update events, and the capability of evaluating different types of values (see TypeEvaluator for more on this). There are two pieces to animating properties: calculating the animated values, and setting those values on the object and property in question. ValueAnimator takes care of the first part: calculating the values. The ObjectAnimator class, which we'll see next, is responsible for setting those values on target objects.

Most of the time, you will want to use ObjectAnimator, because it makes the whole process of animating values on target objects much easier. But sometimes you may want to use ValueAnimator directly. For example, the object you want to animate may not expose setter functions necessary for the property animation system to work. Or perhaps you want to run a single animation and set several properties from that one animated value. Or maybe you just want a simple timing mechanism. Whatever the case, using ValueAnimator is easy; you just set it up with the animation properties and values that you want and start it. For example, to animate values between 0 and 1 over a half-second, you could do this:

    ValueAnimator anim = ValueAnimator.ofFloat(0f, 1f);
    anim.setDuration(500);
    anim.start();

But animations are a bit like the tree in the forest philosophy question ("If a tree falls in the forest and nobody is there to hear it, does it make a sound?"). If you don't actually do anything with the values, does the animation run? Unlike the tree question, this one has an answer: of course it runs. But if you're not doing anything with the values, it might as well not be running. If you started it, chances are you want to do something with the values that it calculates along the way. So you add a listener to it, to listen for updates at each frame. And when you get the callback, you call getAnimatedValue(), which returns an Object, to find out what the current value is.

    anim.addUpdateListener(new ValueAnimator.AnimatorUpdateListener() {
        public void onAnimationUpdate(ValueAnimator animation) {
            Float value = (Float) animation.getAnimatedValue();
            // do something with value...
        }
    });

Of course, you don't necessarily always want to animate float values. Maybe you need to animate something that's an integer instead:

    ValueAnimator anim = ValueAnimator.ofInt(0, 100);

or in XML:

    <animator xmlns:android="http://schemas.android.com/apk/res/android"
        android:valueFrom="0"
        android:valueTo="100"
        android:valueType="intType"/>

In fact, maybe you need to animate something entirely different, like a Point, or a Rect, or some custom data structure of your own. The only types that the animation system understands by default are float and int, but that doesn't mean that you're stuck with those two types. You can use the Object version of the factory method, along with a TypeEvaluator (explained later), to tell the system how to calculate animated values for this unknown type:

    Point p0 = new Point(0, 0);
    Point p1 = new Point(100, 200);
    ValueAnimator anim = ValueAnimator.ofObject(pointEvaluator, p0, p1);

There are other animation attributes that you can set on a ValueAnimator besides duration, including:

  • setStartDelay(long): This property controls how long the animation waits after a call to start() before it starts playing.
  • setRepeatCount(int) and setRepeatMode(int): These functions control how many times the animation repeats and whether it repeats in a loop or reverses direction each time.
  • setInterpolator(TimeInterpolator): This object controls the timing behavior of the animation. By default, animations accelerate into and decelerate out of the motion, but you can change that behavior by setting a different interpolator. This function acts just like the one of the same name in the previous Animation class; it's just that the type of the parameter (TimeInterpolator) is different from that of the previous version (Interpolator). But the TimeInterpolator interface is just a super-interface of the existing Interpolator interface in the android.view.animation package, so you can use any of the existing Interpolator implementations, like Bounce, as arguments to this function on ValueAnimator.

ObjectAnimator

ObjectAnimator is probably the main class that you will use in the new animation system. You use it to construct animations with the timing and values that ValueAnimator takes, and also give it a target object and property name to animate. It then quietly animates the value and sets those animated values on the specified object/property. For example, to fade out some object myObject, we could animate the alpha property like this:

    ObjectAnimator.ofFloat(myObject, "alpha", 0f).start();

Note, in this example, a special feature that you can use to make your animations more succinct; you can tell it the value to animate to, and it will use the current value of the property as the starting value. In this case, the animation will start from whatever value alpha has now and will end up at 0.

You could create the same thing in an XML resource as follows:

    <objectAnimator xmlns:android="http://schemas.android.com/apk/res/android"
        android:valueTo="0"
        android:propertyName="alpha"/>

Note, in the XML version, that you cannot set the target object; this must be done in code after the resource is loaded:

    ObjectAnimator anim = AnimatorInflater.loadAnimator(context, resID);
    anim.setTarget(myObject);
    anim.start();

There is a hidden assumption here about properties and getter/setter functions that you have to understand before using ObjectAnimator: you must have a public "set" function on your object that corresponds to the property name and takes the appropriate type. Also, if you use only one value, as in the example above, you are asking the animation system to derive the starting value from the object, so you must also have a public "get" function which returns the appropriate type. For example, the class of myObject in the code above must have these two public functions in order for the animation to succeed:

    public void setAlpha(float value);
    public float getAlpha();

So by passing in a target object of some type and the name of some property foo supposedly on that object, you are implicitly declaring a contract that that object has at least a setFoo() function and possibly also a getFoo() function, both of which handle the type used in the animation declaration. If all of this is true, then the animation will be able to find those setter/getter functions on the object and set values during the animation. If the functions do not exist, then the animation will fail at runtime, since it will be unable to locate the functions it needs. (Note to users of ProGuard, or other code-stripping utilities: If your setter/getter functions are not used anywhere else in the code, make sure you tell the utility to leave the functions there, because otherwise they may get stripped out. The binding during animation creation is very loose and these utilities have no way of knowing that these functions will be required at runtime.)
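
A ProGuard keep rule along these lines (the class name is hypothetical) preserves the pair:

    -keepclassmembers class com.example.MyObject {
        public void setAlpha(float);
        public float getAlpha();
    }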

View properties

The observant reader, or at least the ones that have not yet browsed on to some other article, may have pinpointed a flaw in the system thus far. If the new animation framework revolves around animating properties, and if animations will be used to animate, to a large extent, View objects, then how can they be used against the View class, which exposes none of its properties through set/get functions?

Excellent question: you get to advance to the bonus round and keep reading.

The way it works is that we added new properties to the View class in Honeycomb. The old animation system transformed and faded View objects by just changing the way that they were drawn. This was actually functionality handled in the container of each View, because the View itself had no transform properties to manipulate. But now it does: we've added several properties to View to make it possible to animate Views directly, allowing you to not only transform the way a View looks, but to transform its actual location and orientation. Here are the new properties in View that you can set, get and animate directly:

  • translationX and translationY: These properties control where the View is located as a delta from its left and top coordinates which are set by its layout container. You can run a move animation on a button by animating these, like this: ObjectAnimator.ofFloat(view, "translationX", 0f, 100f);.
  • rotation, rotationX, and rotationY: These properties control the rotation in 2D (rotation) and 3D around the pivot point.
  • scaleX and scaleY: These properties control the 2D scaling of a View around its pivot point.
  • pivotX and pivotY: These properties control the location of the pivot point, around which the rotation and scaling transforms occur. By default, the pivot point is centered at the center of the object.
  • x and y: These are simple utility properties to describe the final location of the View in its container, as a sum of the left/top and translationX/translationY values.
  • alpha: This is my personal favorite property. No longer is it necessary to fade out an object by changing a value on its transform (a process which just didn't seem right). Instead, there is an actual alpha value on the View itself. This value is 1 (opaque) by default, with a value of 0 representing full transparency (i.e., it won't be visible). To fade a View out, you can do this: ObjectAnimator.ofFloat(view, "alpha", 0f);

Note that all of the "properties" described above are actually available in the form of set/get functions (e.g., setRotation() and getRotation() for the rotation property). This makes them both possible to access from the animation system and (probably more importantly) likely to do the right thing when changed. That is, you don't want to scale an object and have it just sit there because the system didn't know that it needed to redraw the object in its new orientation; each of the setter functions takes care to run the appropriate invalidation step to make the rendering work correctly.

AnimatorSet

This class, like the previous AnimationSet, exists to make it easier to choreograph multiple animations. Suppose you want several animations running in tandem, like you want to fade out several views, then slide in other ones while fading them in. You could do all of this with separate animations and either manually starting the animations at the right times or with startDelays set on the various delayed animations. Or you could use AnimatorSet to do all of that for you. AnimatorSet allows you to play animations together, playTogether(Animator...), to play them one after the other, playSequentially(Animator...), or to organically build up a set of animations that play together, sequentially, or with specified delays by calling the functions in the AnimatorSet.Builder class, with(), before(), and after(). For example, to fade out v1 and then slide in v2 while fading it in, you could do something like this:

    ObjectAnimator fadeOut = ObjectAnimator.ofFloat(v1, "alpha", 0f);
    ObjectAnimator mover = ObjectAnimator.ofFloat(v2, "translationX", -500f, 0f);
    ObjectAnimator fadeIn = ObjectAnimator.ofFloat(v2, "alpha", 0f, 1f);
    AnimatorSet animSet = new AnimatorSet();
    animSet.play(mover).with(fadeIn).after(fadeOut);
    animSet.start();

Like ValueAnimator and ObjectAnimator, you can create AnimatorSet objects in XML resources as well.

TypeEvaluator

I wanted to talk about just one more thing, and then I'll leave you alone to explore the code and play with the API demos. The last class I wanted to mention is TypeEvaluator. You may not use this class directly for most of your animations, but you should know that it's there in case you need it. As I said earlier, the system knows how to animate float and int values, but otherwise it needs some help knowing how to interpolate between the values you give it. For example, if you want to animate between the Point values in one of the examples above, how is the system supposed to know how to interpolate the values between the start and end points? Here's the answer: you tell it how to interpolate, using TypeEvaluator.

TypeEvaluator is a simple interface that you implement that the system calls on each frame to help it calculate an animated value. It takes a floating point value which represents the current elapsed fraction of the animation and the start and end values that you supplied when you created the animation and it returns the interpolated value between those two values at that fraction. For example, here's the built-in FloatEvaluator class used to calculate animated floating point values:

    public class FloatEvaluator implements TypeEvaluator {
        public Object evaluate(float fraction, Object startValue, Object endValue) {
            float startFloat = ((Number) startValue).floatValue();
            return startFloat + fraction * (((Number) endValue).floatValue() - startFloat);
        }
    }

But how does it work with a more complex type? For an example of that, here is an implementation of an evaluator for the Point class, from our earlier example:

    public class PointEvaluator implements TypeEvaluator {
        public Object evaluate(float fraction, Object startValue, Object endValue) {
            Point startPoint = (Point) startValue;
            Point endPoint = (Point) endValue;
            // Point coordinates are ints, so truncate the interpolated values
            return new Point((int) (startPoint.x + fraction * (endPoint.x - startPoint.x)),
                    (int) (startPoint.y + fraction * (endPoint.y - startPoint.y)));
        }
    }

Basically, this evaluator (and probably any evaluator you would write) is just doing a simple linear interpolation between two values. In this case, each 'value' consists of two sub-values, so it is linearly interpolating between each of those.

You tell the animation system to use your evaluator by either calling the setEvaluator() method on ValueAnimator or by supplying it as an argument in the Object version of the factory method. To continue our earlier example animating Point values, you could use our new PointEvaluator class above to complete that code:

    Point p0 = new Point(0, 0);
    Point p1 = new Point(100, 200);
    ValueAnimator anim = ValueAnimator.ofObject(new PointEvaluator(), p0, p1);

One of the ways that you might use this interface is through the RGBEvaluator implementation, which is included in the Android SDK. If you animate a color property, you will probably either use this evaluator automatically (which is the case if you create an animator in an XML resource and supply colors as values) or you can set it manually on the animator as described in the previous section.

But Wait, There's More!

There's so much more to the new animation system that I haven't gotten to. There's the repetition functionality, the listeners for animation lifecycle events, the ability to supply multiple values to the factory methods to get animations between more than just two endpoints, the ability to use the Keyframe class to specify a more complex time/value sequence, the use of PropertyValuesHolder to specify multiple properties to animate in parallel, the LayoutTransition class for automating simple layout animations, and so many other things. But I really have to stop writing soon and get back to working on the code. I'll try to post more articles in the future on some of these items, but also keep an eye on my blog at graphics-geek.blogspot.com for upcoming articles, tutorials, and videos on this and related topics. Until then, check out the API demos, read the overview of Property Animation posted with the 3.0 SDK, dive into the code, and just play with it.

URL: http://android-developers.blogspot.com/2011/02/animation-in-honeycomb.html

[Gd] Amping Up Chrome’s Background Feature

| More

Chromium Blog: Amping Up Chrome’s Background Feature

Many users rely on apps to provide timely notifications for things like calendar events and incoming chat messages, but find it cumbersome to always keep a Chrome window open. Extensions and packaged apps can already display notifications and maintain state without any visible windows, using background pages. This functionality is now available to hosted apps - the most common form of apps in the Chrome Web Store - via a new background window mechanism.

Apps and extensions that use the new “background” feature can continue to run in the background—even if the user closes down all of Chrome’s windows. “Background apps” will continue to run until Chrome exits. The next time Chrome starts up, any background windows that were previously running will also be re-launched. These windows are not going to be visible but they will be able to perform tasks like checking for server-side changes and pre-emptively loading content into local storage.


One way you can use background windows is to preload content and data so that they are immediately available when the user opens your app. You could also issue HTML5 notifications to alert the user when important events occur—for example, a friend wants to initiate a chat session. There are plenty of possibilities here, and we look forward to seeing what you’ll do.

To protect our users’ privacy, we’ve made this functionality available only to apps and extensions; regular websites will not be able to open background windows. Developers will also need to declare the “background” capability on their apps.
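
For a hosted app, that declaration lives in the manifest; a sketch (names and URLs are placeholders):

    {
      "name": "Chat Notifier",
      "version": "1.0",
      "app": {
        "urls": ["http://chat.example.com/"],
        "launch": { "web_url": "http://chat.example.com/" }
      },
      "permissions": ["background", "notifications"]
    }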

Users can easily see which background apps (and extensions) are running in their system through the “Background Apps” menu of the Chrome icon in the system tray (Windows/Linux) or dock (Mac). Chrome will automatically load background components when the user logs in, and the Chrome icon will remain in the system tray or dock as long as background apps are running, even if all Chrome windows are closed. To close all background components, a user just needs to exit Chrome.

The feature is already available in Chrome’s Dev channel. For details on the API, check out our developer’s guide, which also includes sample apps to try out.

Posted by Andrew Wilson, Software Engineer and Michael Mahemoff, Developer Relations
URL: http://blog.chromium.org/2011/02/amping-up-chromes-background-feature.html

Wednesday, February 23, 2011

[Gd] Chrome Beta Release


Google Chrome Releases: Chrome Beta Release

The Beta channel has been updated to 10.0.648.114 for all platforms.  This release contains stability improvements and UI tweaks.  Full details about the Chrome changes are available in the SVN revision log. If you find new issues, please let us know by filing a bug. Want to change to another Chrome release channel? Find out how.


Jason Kersey
Google Chrome
URL: http://googlechromereleases.blogspot.com/2011/02/chrome-beta-release_23.html

[Gd] [Libraries][Update] jQueryUI 1.8.10


Google AJAX API Alerts: [Libraries][Update] jQueryUI 1.8.10

jQueryUI has been updated to 1.8.10
URL: http://ajax-api-alerts.blogspot.com/2011/02/librariesupdate-jqueryui-1810.html

[Gd] This Code is CRAP


Google Testing Blog: This Code is CRAP

Note: This post is rated PG-13 for use of a mild expletive. If you are likely to be offended by the repeated use of a word commonly heard in elementary school playgrounds, please don’t read any further.

CRAP is short for Change Risk Anti-Patterns – a mildly offensive acronym to protect you from deeply offensive code. CRAP was originally developed and launched in 2007 by yours truly (Alberto Savoia) and my colleague and partner in crime Bob Evans.

Why call it CRAP? When a developer or tester has to work with someone else’s (bad) code, they rarely comment on it by saying things like: “The median cyclomatic complexity is unacceptable,” or “The efferent coupling values are too high.” Instead of citing a particular objective metric, they summarize their subjective evaluation and say things like: “This code is crap!” At least those are the words the more polite developers use; I’ve heard and read far more colorful adjectives and descriptions over the years. So Bob and I decided to coin an acronym that, in addition to being memorable – even if it’s for the wrong reasons, is a good match with the language that its intended users use and it’s guaranteed to grab a developer’s attention: “Hey, your code is CRAP!”

But what makes a particular piece of code CRAP? There is, of course, no fool-proof, 100% objective, and accurate way to determine CRAPpiness. However, our experience and intuition – backed by a bit of research and a lot of empirical evidence – suggested the possibility that there are detectable and measurable patterns that indicate the possible presence of CRAPpy code. That was enough to get us going with the first anti-pattern (which I’ll describe shortly.)

Since its inception, the original version of CRAP has gained quite a following; it has been ported to various languages and platforms (e.g. Java, .NET, Ruby, PHP, Maven, Ant) and it’s showing up in both free and commercial code analysis tools such as Hudson’s Cobertura and Atlassian’s Clover. Do a Google search for “CRAP code metric” and you’ll see quite a bit of activity. All of which makes Bob and me feel mighty proud, but we haven’t been resting on our laurels. Well, actually we have done precisely that. After our initial work (which included the Crap4J Eclipse plug-in and the, now mostly abandoned, crap4j.org website) we both went to work for Google and got busy with other projects. However, the success and adoption of CRAP is a good indication that we were on to something and I believe it’s time to invest a bit more in it and move it forward.



Over the next few weeks I will post about the past, present and future of CRAP. By the time I’m done, you will have the tools to:

- Know your CRAP
- Cut the CRAP, and
- Don’t take CRAP from nobody!

I’ll finish today’s entry with a bit of background on the original CRAP metric.

A Brief History of CRAP

As the CRAP acronym suggests, there are several possible patterns that make a piece of code CRAPpy, but we had to start somewhere. Here is the first version of the (in)famous formula to help detect CRAPpy Java methods. Let’s call it CRAP1, to make clear that this covers just one of the many interesting anti-patterns and that there are more to come.

CRAP1(m) = comp(m)^2 * (1 – cov(m)/100)^3 + comp(m)

Where CRAP1(m) is the CRAP1 score for a method m, comp(m) is the cyclomatic complexity of m, and cov(m) is the basis path code coverage from automated tests for m.

If CRAP1(m) > 30, we consider the method to be CRAPpy.
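
The formula transcribes directly into code; a quick sanity check (in Python) shows how coverage pulls the score down:

    def crap1(complexity, coverage):
        # CRAP1 for a method with the given cyclomatic complexity and
        # basis-path coverage percentage (0-100).
        return complexity ** 2 * (1 - coverage / 100.0) ** 3 + complexity

    assert crap1(10, 0) == 110    # complex and untested: CRAPpy (> 30)
    assert crap1(10, 100) == 10   # same complexity, fully covered: fine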

This CRAP1 formula did not materialize out of thin air. We arrived at this particular function empirically; it’s the result of a best-fit curve achieved through a lot of trial and error. At the time we had access to the source code for a large number of open source and commercial Java projects, along with their associated JUnit tests. This allowed us to rank code for CRAPpiness using one formula, ask our colleagues if they agreed, and keep iterating until we reached diminishing returns. This way we were able to come up with a curve that was a pretty good fit for the more subjective data we got from our colleagues.

Here’s why we think that CRAP1 is a good anti-pattern to detect. Writing automated tests (e.g., using JUnit) for complex and convoluted code is particularly challenging, so crappy code usually comes with few, if any, automated tests. This means that the presence of automated tests implies not only some degree of testability (which in turn seems to be associated with better, or more thoughtful, design), but also that the developers cared enough, knew enough and had enough time to write tests – which is another good sign for the people inheriting the code. These sounded like reasonable assumptions at the time, and the adoption of CRAP1 – especially by the Agile community – reflects that.

Like all software metrics, CRAP1 is neither perfect nor complete. We know very well, for example, that you can have great code coverage and lousy tests. In addition, sometimes complex code is either unavoidable or preferable; there might be instances where a single higher complexity method might be easier to understand than three simpler ones. We are also aware that the CRAP1 formula doesn’t currently take into account higher-order, more design-oriented metrics that are relevant to maintainability (such as cohesion and coupling) – but it’s a start, the plan is to add more anti-patterns.

Use CRAP On Your Project

Even though Bob and I haven't actively developed or maintained Crap4J in the past few years (shame on us!), other brave developers have been busy porting CRAP to all sorts of languages and environments. As a result, there are many versions of the CRAP metric in open source and commercial tools. If you want to try CRAP on your project, the best thing to do is to run a Google search for the language and tools you are currently using.

For example, a search for "crap metric .net" returned several projects, including crap4n and one called crap4net. If you use Clover, here's how you can use it to implement CRAP. PHP? No problem, someone implemented CRAP for PHPUnit. However, apparently nobody has implemented CRAP for COBOL yet ... here's your big chance!

Until the next blog on CRAP, you might enjoy this vintage video on Crap4J. Please note, however, that the Eclipse plug-in shown in the demo does not work with versions of Eclipse newer than 3.3 - we did say it was a vintage video and that Bob and I have been resting on our laurels!

Posted by Alberto Savoia
URL: http://googletesting.blogspot.com/2011/02/this-code-is-crap.html

[Gd] How Google Tests Software - A Brief Interlude

| More

Google Testing Blog: How Google Tests Software - A Brief Interlude

By James Whittaker

These posts have garnered a number of interesting comments. I want to address two of the negative ones in this post. Both are of the same general opinion that I am abandoning testers and that Google is not a nice place to ply this trade. I am puzzled by these comments because nothing could be further from the truth. One such negative comment I can take as a one-off but two smart people (hey they are reading this blog, right?) having this impression requires a rebuttal. Here are the comments:

"A sad day for testers around the world. Our own spokesman has turned his back on us. What happened to 'devs can't test'?" by Gengodo

"I am a test engineer and Google has been one of my dream companies. Reading your blog I feel that Testers are so unimportant at Google and can be easily laid off. It's sad." by Maggi

First of all, I don't know of any tester or developer for that matter being laid off from Google. We're hiring at a rapid pace right now. However, we do change projects a lot so perhaps you read 'taken off a project' to mean something far worse than the reality of just moving to another project. A tester here may move every couple of years or so and it is a badge of honor to get to the point where you've worked yourself out of a job by building robust test frameworks for others to contribute tests to or to pass off what you've done to a junior tester and move on to a bigger challenge. Maggi, please keep the dream alive. If Google was a hostile place for testers, I would be working somewhere else.

Second, I am going to dodge the negative undertones of the developer vs tester debate. Whether developers can test or testers can code seems downright combative. Both types of engineers share the common goal of shipping a product that will be successful. There is enough negativity in this world and testers hating developers seems so 2001.

In fact, I feel a confession coming on. I have had sharp words with developers in the past. I have publicly decried the lack of testing rigor in commercial products. If you've seen me present you've probably witnessed me showing colorful bugs, pointing to the screen and shouting "you missed a spot!" I will admit, that was fun.

Here are some other quotes I have directed at developers:

"You must be smarter than me because I couldn't write this bug if I was trying to."

"What happened, did the compiler get your eye?"

"What do you say to a developer with two black eyes? Nothing, he's already been told twice."

"Did you hear about the developer who locked himself in his car?"

Ah, those were the good old days! But it's 2011 now, and I am objective enough to give developers credit when they step up to the plate and do their job. At Google, many have, and they are helping to shame the rest into following suit. And this is making bugs harder to find. I waste so little time on low-hanging fruit that I get to dig deeper to find the really subtle, really critical bugs. The signal-to-noise ratio is just a whole lot stronger now. Yes, there are fewer developer jokes, but this is progress. I have to make myself feel good knowing how many bugs have been prevented instead of how many laughs I can get on stage demonstrating their miserable failures.

This is progress.

And, incidentally, developers can test. In some cases, far better than testers can. Modern testing is about optimizing the mix of where developers test and where testers test. Getting that mix right means a great product. Getting it wrong puts us back in 2001, when my presentations were a heck of a lot funnier.

In what cases are developers better testers than we are? In what cases are they not only poor testers, but we're better off not having them touch the product at all? Well, that's the subject of my next couple of posts. In the meantime...

...Peace.
URL: http://googletesting.blogspot.com/2011/02/how-google-tests-software-brief.html

[Gd] Best Practices for Honeycomb and Tablets

Android Developers Blog: Best Practices for Honeycomb and Tablets

The first tablets running Android 3.0 (“Honeycomb”) will be hitting the streets on Thursday Feb. 24th, and we’ve just posted the full SDK release. We encourage you to test your applications on the new platform, using a tablet-size AVD.

Developers who’ve followed the Android Framework’s guidelines and best practices will find their apps work well on Android 3.0. This purpose of this post is to provide reminders of and links to those best practices.

Moving Toward Honeycomb

There’s a comprehensive discussion of how to work with the new release in Optimizing Apps for Android 3.0. The discussion includes the use of the emulator; most developers, who don’t have an Android tablet yet, should use it to test and update their apps for Honeycomb.

While your existing apps should work well, you also have the option to improve their look and feel on Android 3.0 by using Honeycomb features; for example, see The Android 3.0 Fragments API. We'll have more on that in this space, but in the meantime we recommend reading Strategies for Honeycomb and Backwards Compatibility for advice on adding Honeycomb polish to existing apps.
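
To give you a taste, here's a minimal Fragment sketch (the class name and layout resource are hypothetical):

import android.app.Fragment;
import android.os.Bundle;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;

// A bare-bones Honeycomb Fragment that simply inflates its own layout;
// R.layout.details_pane is a placeholder for one of your resources.
public class DetailsFragment extends Fragment {
    @Override
    public View onCreateView(LayoutInflater inflater, ViewGroup container,
                             Bundle savedInstanceState) {
        return inflater.inflate(R.layout.details_pane, container, false);
    }
}

An Activity can then compose one or more such panes, which is what makes Fragments a good fit for the larger tablet screen.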

Specifying Features

There have been reports of apps not showing up in Android Market on tablets. Usually, this is because your application manifest has something like this:

<uses-feature android:name="android.hardware.telephony" />

Many of the tablet devices aren't phones, so Android Market assumes the app requires telephony hardware and won't offer it to them. See the documentation of <uses-feature>. However, such an app's use of the telephony APIs might well be optional, in which case it should be available on tablets. There's a discussion of how to accomplish this in Future-Proofing Your App and The Five Steps to Future Hardware Happiness.
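
The usual approach, covered in those posts, is to declare the feature as optional in the manifest and then check for it at runtime. Here's a minimal sketch (the activity class is hypothetical):

import android.app.Activity;
import android.content.pm.PackageManager;
import android.os.Bundle;

public class PhoneAwareActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // The manifest declares the feature as optional:
        // <uses-feature android:name="android.hardware.telephony"
        //               android:required="false" />
        boolean hasTelephony = getPackageManager()
                .hasSystemFeature(PackageManager.FEATURE_TELEPHONY);
        if (!hasTelephony) {
            // Hide or disable phone-only features on devices without radios.
        }
    }
}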

Rotation

The new environment is different from what we're used to in two respects. First, you can hold the device with any of its four sides up, and Honeycomb manages the rotation properly. In previous versions, often only two of the four orientations were supported, and there are apps out there that rely on this in ways that will break on Honeycomb. If you want to stay out of rotation trouble, One Screen Turn Deserves Another covers the issues.

The second big difference doesn't have anything to do with software; it's that a lot of people are going to hold these things horizontally (in "landscape mode") nearly all the time. We've seen a few apps that make the buggy assumption that they're starting out in portrait mode, and others that lock certain screens into portrait or landscape but really shouldn't.
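
If your code does need to know the current orientation, ask the system rather than assuming it; here's a minimal sketch (the class is hypothetical):

import android.app.Activity;
import android.view.Display;
import android.view.Surface;

public class RotationAwareActivity extends Activity {
    // Returns the screen's rotation, in degrees, relative to the
    // device's natural orientation - which may be landscape on a tablet.
    private int currentRotationDegrees() {
        Display display = getWindowManager().getDefaultDisplay();
        switch (display.getRotation()) {
            case Surface.ROTATION_90:  return 90;
            case Surface.ROTATION_180: return 180;
            case Surface.ROTATION_270: return 270;
            default:                   return 0;   // Surface.ROTATION_0
        }
    }
}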

A Note for Game Developers

A tablet can probably provide a better game experience for your users than any handset can. Bigger is better. It's going to cost you a little more work than it costs developers of business apps, because you'll quite likely want to rework your graphical assets for the big screen.

There’s another issue that’s important to game developers: Texture Formats. Read about this in Game Development for Android: A Quick Primer, in the section labeled “Step Three: Carefully Design the Best Game Ever”.

We've also added a convenient way to filter applications in Android Market based on the texture formats they support; see the documentation of <supports-gl-texture> for more details.
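
At runtime, you can also check which compressed texture formats the GPU actually supports before choosing assets. Here's a minimal sketch that tests for the common ETC1 format (the class and method names are ours):

import javax.microedition.khronos.opengles.GL10;

public final class TextureFormats {
    // Returns true if the current GL context reports support for
    // ETC1-compressed textures via its extensions string.
    static boolean supportsEtc1(GL10 gl) {
        String extensions = gl.glGetString(GL10.GL_EXTENSIONS);
        return extensions != null
                && extensions.contains("GL_OES_compressed_ETC1_RGB8_texture");
    }
}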

Happy Coding

Once you’ve held one of the new tablets in your hands, you’ll want to have your app not just running on it (which it probably already does), but expanding minds on the expanded screen. Have fun!

URL: http://android-developers.blogspot.com/2011/02/best-practices-for-honeycomb-and.html

[Gd] Final Android 3.0 Platform and Updated SDK Tools

Android Developers Blog: Final Android 3.0 Platform and Updated SDK Tools

We are pleased to announce that the full SDK for Android 3.0 is now available to developers. The APIs are final, and you can now develop apps targeting this new platform and publish them to Android Market. The new API level is 11.

For an overview of the new user and developer features, see the Android 3.0 Platform Highlights.

Together with the new platform, we are releasing updates to our SDK Tools (r10) and ADT Plugin for Eclipse (10.0.0). Key features include:

  • UI Builder improvements in the ADT Plugin:
    • New Palette with categories and rendering previews. (details)
    • More accurate rendering of layouts, including status and title bars, to reflect more faithfully how a layout will look on a device and how much screen space is actually available to the application.
    • Selection-sensitive action bars to manipulate View properties.
    • Zoom improvements (fit to view, persistent scale, keyboard access) (details).
    • Improved support for <merge> layouts, as well as layouts with gesture overlays.
  • Traceview integration for easier profiling from ADT. (details)
  • Tools for using the Renderscript graphics engine: the SDK tools now compile .rs files into Java source files and native bytecode.

To get started developing or testing applications on Android 3.0, visit the Android Developers site for information about the Android 3.0 platform, the SDK Tools, and the ADT Plugin.

URL: http://android-developers.blogspot.com/2011/02/final-android-30-platform-and-updated.html

Tuesday, February 22, 2011

[Gd] Extending the Omnibox

Chromium Blog: Extending the Omnibox

One of the most powerful aspects of Google Chrome is the omnibox, also known as the address bar. You can type URLs and searches into one unified place and it all just works. With the new omnibox API, extension developers can make the omnibox even more powerful.

The omnibox API lets extension developers add their own keyword command to the omnibox. When the user types a query prefixed by this keyword, the extension can suggest potential completions and react to the user's input.

For example, one sample extension lets you search and switch between your open tabs right from the omnibox.

Keep an eye out for cool new extensions as developers get their hands on this API!

Posted by Matt Perry, Software Engineer
URL: http://blog.chromium.org/2011/02/extending-omnibox.html

[Gd] Got ideas? We're listening.

Google Custom Search: Got ideas? We're listening.

In the past several months we've added a number of new features to Google Custom Search – and we have you to thank! More than a year ago, we told you about a new Google Custom Search Product Ideas page, and since then you've voted thousands of times on all sorts of great ideas for improving the product. That doesn't even include the stellar suggestions we get on a regular basis in the help forum. In fact, query autocompletion was a help forum suggestion from swoodby that's now available with just a few clicks in the Control Panel. We're thrilled to have this productive feedback loop with you, and want to report back on some of the product iterations we made during the past year.

Wireless data consumption has more than doubled every year, so we’re happy to have added mobile search features to the product. As requested on the Product Ideas page, users can now search on your website using their mobile devices. The default homepage for your custom search engine is now optimized for your on-the-go users. We will continue to optimize Custom Search to meet the needs of a growing mobile user base.

In response to your requests for metadata capabilities, we launched a set of features to support structured custom search. You now have the ability to filter by attributes such as author, define attribute ranges such as dates, and sort by specific attribute values such as ratings. We plan to make these metadata features even easier to use through the Custom Search Element, which generates code that you can copy and paste to easily add Custom Search to any website.

You’ve also made it clear from your feedback that you love customizing your search engine and adding your own flair. So, over time we’ve made it possible for you to tweak the layout of your results, customize your synonyms, control autocompletions, and apply custom styles to your search engine. Now it’s even possible to select a theme for your ads.

What’s the moral of the story here? Your mic is on and we’re listening. Keep the feedback coming in the help forum (the Product Ideas page is closed for now) and we’ll continue working to make Custom Search better. After all, it’s really your product.

Posted by: Kelly Fee, Associate, Consumer Operations
URL: http://googlecustomsearch.blogspot.com/2011/02/got-ideas-were-listening.html