Saturday, June 12, 2010

[Gd] Download Count Problems

Android Developers Blog: Download Count Problems

Something is apparently wrong in the Android Market. We are getting multiple reports of erroneous download counts. The right people are aware of the situation and are working on it.

URL: http://android-developers.blogspot.com/2010/06/download-count-problems.html

Friday, June 11, 2010

[Gd] Google Videos best practices

Official Google Webmaster Central Blog: Google Videos best practices

Webmaster Level: All

We'd like to highlight three best practices that address some of the most common problems found when crawling and indexing video content: ensuring your video URLs are crawlable, stating which countries your videos may be played in, and clearly indicating to search engines when your videos have been removed.

  • Best Practice 1: Verify your video URLs are crawlable: check your robots.txt
    • Sometimes publishers unknowingly include video URLs in their Sitemap that are disallowed by robots.txt. Please make sure your robots.txt file isn't blocking any of the URLs specified in your Sitemap. This includes URLs for the:
      • Playpage
      • Content and player
      • Thumbnail
      More information about robots.txt.

  • Best Practice 2: Tell us what countries the video may be played in
    • Is your video only available in some locales? The optional attribute “restriction” has recently been added (documentation at http://www.google.com/support/webmasters/bin/answer.py?answer=80472), which you can use to tell us whether the video can only be played in certain territories. Using this tag, you have the option of either including a list of all countries where it can be played, or just telling us the countries where it can't be played. If your videos can be played everywhere, then you don't need to include this.

  • Best Practice 3: Indicate clearly when videos are removed -- protect the user experience
    • Sometimes publishers take videos down but don't signal to search engines that they've done so. This can result in the search engine's index not accurately reflecting the content of the web. When users then click on a search result, they're taken either to a page indicating that the video doesn't exist, or to a different video. Users find this experience dissatisfying. Although we have mechanisms to detect when search results are no longer available, we strongly encourage following community standards.

      To signal that a video has been removed,
      1. Return a 404 (Not Found) HTTP response code; you can still return a helpful page to be displayed to your users. Check out these guidelines for creating useful 404 pages.
      2. Indicate expiration dates for each video listed in a Video Sitemap (use the <video:expiration_date> element) or mRSS feed (<dcterms:valid> tag) submitted to Google.

For more information on Google Videos, please visit our Help Center; to post questions and search for answers, check out our Help Forum.

Posted by Nelson Lee, Product Manager, Video Search
URL: http://googlewebmastercentral.blogspot.com/2010/06/google-videos-best-practices.html

[Gd] DevFest Tour - Coming to a City Near You

Google Code Blog: DevFest Tour - Coming to a City Near You



Just last month we had Google I/O and although we had attendees from all over the world join us for the festivities, we know that most of you could not join us in San Francisco. To help make up for that, we decided to do a DevFest tour and recently announced that we were on our way to visit Australia (Sydney), Israel, and Southeast Asia (Manila, Singapore, and Kuala Lumpur) in the next couple months. Then, there’s Spain (Madrid), Argentina (Buenos Aires), and Chile (Santiago) in October. Here’s your chance to hear about your favorite Google technologies and interact with the Googlers that work on them every day.

Today, we’ve updated the site to include the cities we’re visiting and topics we’d like to cover, along with registration links for the first round of events. Space is limited at each location and we cannot guarantee that everyone will be able to secure a spot so register early, check your email for confirmation and check back for any event updates.

For many of our international speakers, this is their first time visiting most of the cities on our tour, and they're incredibly excited to meet the local developer communities and learn what you're doing with our technologies - or what you're thinking of doing.

We hope to see you there!

By Christine Songco, Developer Relations
URL: http://googlecode.blogspot.com/2010/06/devfest-tour-coming-to-city-near-you.html

[Gd] Xobni Brings Gmail Contextual Gadgets to Outlook – with Zero Work for Gadget Developers

Google Apps Developer Blog: Xobni Brings Gmail Contextual Gadgets to Outlook – with Zero Work for Gadget Developers

Editor's Note: Jeff Bonforte is CEO at Xobni. Xobni gives you instant access to email, contact information, conversations and attachments that are often lost in exploding inboxes. Xobni has leveraged Gmail Contextual Gadgets to make your email more responsive. We invited Xobni to share their experience with Gmail Contextual Gadgets.

The new Gmail contextual gadgets platform announced at I/O last month is bringing renewed innovation to the inbox. Using this simple but powerful platform, developers can write innovative new gadgets for users of Google Apps and Gmail. But to offer the same gadgets to Outlook’s 600 million users, a developer would need to do quite a bit of extra work.

They would need to be very familiar with COM, know about CBT hooks, have mastery of runtime callable wrappers (RCWs) and Outlook’s primary interop assembly (PIA). They should know MAPI inside and out. They should be experts in Windows forms and Redemption. They should understand the requirements of the new 64-bit support needed for Outlook 2010. They should understand the nine APIs of Outlook, including OOM (introduced in 2007) and OSC (introduced in 2010). They should develop in .Net, and have a massive variety of QA virtual machines running every version of Win XP to Win 7, with every version and service pack of Outlook 2003 to 2010, including a large variety of the top 400 add-ins to Outlook, and every version of Redemption (running with each version installed in various order to trap on likely conflicts).

And even the most experienced engineer in these technologies will need to reference Charles Petzold’s seminal Programming Windows 95 (out of print, 1996, Microsoft Press) and De La Cruz and Thaler’s Inside MAPI (out of print, 1996, Microsoft Press).

Given these non-trivial hurdles, we were pretty sure most developers would forgo the opportunity to offer their clever new gadgets to Outlook’s millions of users. That is a shame.

So we got on the case to bring this innovation from Gmail gadget developers to Outlook users. Enter Xobni.

Xobni (“inbox” spelled backwards) is a San Francisco startup that offers an easier and better way to manage the people and information in email and on your phone. Our free search and contact manager add-in for Outlook has been downloaded over five million times in the last two years. We focused early on Outlook for two simple reasons: size and pain.

When the team at Google called to encourage us to bring our Xobni app to Gmail in time for Google I/O, we were thrilled. It has long been the #1 request of our customers…and engineers.  It didn’t take much work for us to realize the simplicity and power of this platform.  In fact, we had our first gadget for Hoover’s business information written in about a day. Though we are excited to write gadgets, we wanted to do something bigger. We wanted Gmail gadgets to run natively for the millions of people using Outlook.

So we set out to port over Google’s platform to Outlook in a ridiculously short amount of time. The first step was to get Google on board.  We weren’t sure what to expect from them when we explained our plan. The first response we got from the Google team was puzzlement. Why and how would we do this? In a short amount of time, Google’s mood progressed from quiet to excited (phew).  So we set up the war room in the office, cleared our calendars and weekends for the foreseeable future and started cranking away. (And, yes, even with our Outlook expertise, we frequently referenced Programming Windows 95 and Inside MAPI along the way.)

The result: developers can now write one application for Gmail contextual gadgets and will soon be able to deploy it not just to the millions of Gmail users, but also to the millions of Outlook users: the same code available in both worlds. Thanks to Google’s simple but powerful platform (and the hard work of Xobni’s engineers), you just write your gadgets for Gmail and they are ready to be used in Outlook as well.

Gadgets in Outlook

Want to get started? Download the Xobni for Outlook Developer Preview today. It includes a “Welcome Gadget” with sample code, so you can see your Gmail gadgets in Outlook. We plan to release an updated version of Xobni that will allow end users to start enjoying your gadgets. 

We’d love to hear from you. Let us know what you think and how we can continue to make email a better place to be.

Author: Jeff Bonforte, CEO Xobni

URL: http://googleappsdeveloper.blogspot.com/2010/06/xobni-brings-gmail-contextual-gadgets.html

[Gd] Blogging Round the World

Android Developers Blog: Blogging Round the World

It seems that once or twice a week, I run across an Android-developer-oriented site that I hadn’t previously noticed. There are already a few aggregators and directories, and I think we’re going to need more. But for the moment, here are three pieces of bloggy Android goodness, from Florida, Odessa (Ukraine!), and Sydney. What they have in common is that I previously hadn't encountered any of them.

Font Magic

This is from Florida-based Jeff Vera’s Musings of the Bare Bones Coder, which, although it advertises itself as being about “Coding and managing in the .NET space”, recently ran the excellent Android Development – Using Custom Fonts. You’ve always been able to use your own fonts in your own apps, but the how-to coverage has been light.

How Hot Is It?

Ivan Memruk from Odessa, Ukraine, brings us Mind The Robot, which has a refreshing concern for visual elegance. Speaking of which, soak up the analog steampunk tastiness of Android Custom UI: Making a Vintage Thermometer.

Aussie Rules

In this case, I mean rules for getting your Android project set up for use both via Eclipse and command-line Ant. Daniel Ostermeier and Jason Sankey from Sydney run the Android-dense a little madness, and lay the rules out in Setting Up An Android Project Build. Lots of steps; but very handy for a command-line guy like me.

URL: http://android-developers.blogspot.com/2010/06/blogging-round-world.html

Thursday, June 10, 2010

[Gd] Sharing Code - Online and Offline

Google Web Toolkit Blog: Sharing Code - Online and Offline

Today's guest blog post comes from Matthias Büchner, Software Engineer at digital security company Gemalto, along with his colleagues Colin Gormley, Jonas Pärt and Ella Segura. Here they discuss their Device Administration Service and its recent port to GWT.



Introduction


The Device Administration Service (DAS) solution allows you to manage the personalization and lifecycle of smart card devices. It was developed using ASP .NET and native JavaScript as a hosted web application. Our team was asked to port the solution for deployment as both a hosted and an offline standalone system.




We had three major goals for the re-design:





  1. Use the same code base for both the hosted and offline solutions. A single code base would reduce the cost of maintenance and support in the future without increasing the upfront cost of development too much. In order to achieve this, we needed a solution that was not dependent upon any server-based technology.

  2. Improve testability. One of the biggest problems for the existing DAS solution was the lack of sufficient automated unit and functional tests. A large amount of time and resources was required to manually validate the solution.

  3. Enhance usability. The existing DAS was still using the submit/refresh page approach of web 1.0 and definitely needed modernizing. The re-design was a good opportunity to select a technology that offers a more desktop-like experience.


Here was our strategy:



  1. Identify which features require a server-side component. These were going to be our problem areas.
  2. Look for a solution that will run inside a browser offline. We wanted to keep the interface the same for both hosted and standalone deployment.

  3. Select our technology carefully, keeping in mind testability and cross-browser compatibility.





GWT was the obvious choice. It addresses all the goals we had outlined, plus we really looked forward to writing Java code again after living and breathing complex native JavaScript for so long. The prospect of using JUnit had us practically salivating. In this article, we will discuss how we designed our application with GWT and give a few tips that we learned while developing it.



A Smart Card Driven Architecture




A smart card is a secured electronic chip typically embedded in a plastic card or inside a USB dongle. Two examples of what a smart card can do are managing physical and logical access, and securing electronic transactions. Despite their small size, smart cards offer a self-contained operating system and can run very advanced applications. The card is connected to the PC using a smart card reader, and the low-level communication between the reader and the smart card is done via APDUs (Application Protocol Data Units).




Traditionally, you are required to install some platform-specific software on your computer before you can communicate with a smart card. This software is known as middleware; it typically has a large footprint and must be developed for specific environments. Gemalto developed a browser extension, called SConnect™, which allows communication with a smart card using JavaScript. SConnect is available on most major operating systems and browsers. SConnect™ is to the smart card what AJAX is to the web. SConnect communicates asynchronously with a smart card, eliminating latency problems on the user interface. We anticipated that this feature would have a strong impact on our design approach.




We investigated the GwtEvent class and its callback mechanism to manage the asynchronous communications. An event is fired just once, and it is handled by one or more handlers. The one-to-many model fits perfectly with how we wanted to manage the notification mechanism when the card is either inserted into or removed from the reader. We created card insertion and removal events and utilized an event bus to manage the event firing. Unfortunately, we found event-driven code difficult to follow and debug.
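
For readers unfamiliar with GWT's event classes (GwtEvent, EventHandler and HandlerManager in com.google.gwt.event.shared), here is a rough sketch of the one-to-many model described above. The class and method names are illustrative assumptions, not the actual DAS code:

public class CardInsertedEvent extends GwtEvent<CardInsertedEvent.Handler> {
    public interface Handler extends EventHandler {
        void onCardInserted(CardInsertedEvent event);
    }

    public static final Type<Handler> TYPE = new Type<Handler>();

    @Override
    public Type<Handler> getAssociatedType() { return TYPE; }

    @Override
    protected void dispatch(Handler handler) { handler.onCardInserted(this); }
}

// Wiring it up, e.g. somewhere in application setup:
void wireCardEvents(HandlerManager eventBus) {
    eventBus.addHandler(CardInsertedEvent.TYPE, new CardInsertedEvent.Handler() {
        public void onCardInserted(CardInsertedEvent event) {
            // one of possibly many handlers reacting to the card arriving in the reader
        }
    });
    eventBus.fireEvent(new CardInsertedEvent());
}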




Callback mechanisms are better suited for one-to-one relationships between the callers and the callees. We opted to use callbacks to manage most of the business and low level logic, since they are mostly one-to-one relationships.




When several callbacks that required data sharing were chained in sequence, things got a little ugly. We started with nested callbacks. As you can see in the AsyncCallbackExample class, the code flow can be difficult to follow and decipher. To solve the problem, we created the following Wrapper class. This class allows us to create a final instance of the data we need to share. Separate callbacks no longer had to be nested to access and change the data we were passing around.



 
public class Wrapper<T> {
    private T value;

    public Wrapper() {}

    public Wrapper(T value) {
        this.value = value;
    }

    public void setValue(T value) {
        this.value = value;
    }

    public T getValue() {
        return value;
    }
}


This class allowed us to improve the readability and ease the maintenance of our code. You can see in detail how to use it in our example class.
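
To make that concrete, here is a hedged sketch of how a final Wrapper instance lets separately defined GWT AsyncCallback objects share data without nesting. The service, readSerialNumber and readLabel names are hypothetical, not the actual DAS API:

final Wrapper<String> serial = new Wrapper<String>();

final AsyncCallback<String> labelCallback = new AsyncCallback<String>() {
    public void onSuccess(String label) {
        // The value stored by the first callback is visible here,
        // even though the two callbacks are not nested.
        GWT.log("Card " + serial.getValue() + ": " + label, null);
    }
    public void onFailure(Throwable caught) {
        GWT.log("Reading the label failed", caught);
    }
};

service.readSerialNumber(new AsyncCallback<String>() {
    public void onSuccess(String serialNumber) {
        serial.setValue(serialNumber);     // share the result via the final wrapper
        service.readLabel(labelCallback);  // continue the chain
    }
    public void onFailure(Throwable caught) {
        GWT.log("Reading the serial number failed", caught);
    }
});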



Rebuilding the User Interface




Our goal was to move all our source code to Java/GWT; this includes the entire user interface that was designed originally with ASP controls. We looked for an existing widget library since we did not want to spend a lot of time developing custom components. ExtJs GWT was a library that some of us were already familiar with, so it was a natural choice.



From a bird's eye view, most of the interface looks the same. Take a closer look, and you'll find that many of the details have been improved.







Data is loaded and updated asynchronously. Status messages and popup dialogs now behave more like those of a desktop application. One of our goals was to keep the source code modular and easy to maintain. We were all excited at the prospect of writing code in a single language. The benefits of moving from source code written in ASP .NET + native JavaScript + CSS + HTML to Java are many. We were especially attracted to compile-time checks, unit tests, and writing all our code in a professional IDE.




We knew we wanted to structure our code in a way that would allow us to decouple the interface from the business logic. The video from Google I/O 2009 on best practices for architecting GWT apps pointed us in the right direction. We adopted the MVP pattern and divided our UI code into presenters and displays using the gwt-presenter library.




In the DAS application, there is more than one presenter at work on any single screen. We needed the presenters to interact with each other but did not want them too tightly coupled. To address this issue, we brought in the event bus. Instead of presenters listening to events fired by other presenters, each presenter now alerts the event bus when something significant happens and registers itself with the event bus for any events it is interested in.




The DAS application is currently localized for three languages. Implementing the Constants and Messages interfaces was very straightforward. We put all the text in an Excel spreadsheet for delivery to our translators and converted it into Property files.
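
For readers who have not used GWT's i18n support, the Constants pattern boils down to an interface whose methods map to keys in per-locale property files. The interface name and keys below are made up for illustration; DefaultStringValue is GWT's own annotation from the Constants interface:

public interface DasConstants extends Constants {  // com.google.gwt.i18n.client.Constants
    @DefaultStringValue("Insert your smart card")
    String insertCardPrompt();

    @DefaultStringValue("Serial number")
    String serialNumberLabel();
}

// Elsewhere: DasConstants constants = GWT.create(DasConstants.class);
//            label.setText(constants.serialNumberLabel());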




We hit a small bump in the road when we realized that GWT does not take the browser's language settings into account for localization. You have to pass in ?locale=xx in the URL when launching the application. To solve this problem, we took two approaches. For the hosted solution, we can easily parse the request header for language information and pass that into GWT using meta tags. For the standalone solution, where we cannot send any HTTP request, we settled for reading the system language settings from JavaScript and passing that information into GWT.
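
As a rough sketch of the standalone approach (an assumption about the mechanism, not the exact DAS code), a JSNI method can read the browser or system language, and the host page can then set the gwt:property locale meta tag (or rewrite the URL) before the module's selection script runs:

// JSNI sketch: expose the browser/OS language to GWT code.
public static native String detectLanguage() /*-{
    var nav = $wnd.navigator;
    return nav.language || nav.userLanguage || nav.systemLanguage || "en";
}-*/;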



A Testing Strategy




There is a consensus in the developer community on the great benefits that unit tests provide. Unfortunately, there is no good JavaScript unit test framework. In the ASP .NET version of the project, validation was done manually. A device configuration issue or similar problem could easily skew the results. Achieving measurable test coverage was not possible. Porting our application to GWT helped solve this problem. We just lost our last argument for not writing unit tests.




One aspect of the DAS user interface that differs from traditional web applications is that we have to interact with a smart card. Almost every function of the application requires a smart card to be inserted, and some features require two smart cards. The smart card itself is a difficult device to test: it needs to be inserted into a smart card reader, and much of the website is driven by smart card removals and insertions. Automating the device would require some sort of smart card or reader emulator that would trick the system into believing that the device was inserted. The technology involved is very low level, and emulating the interface is not a trivial task. We had to look for another solution.



Here is an overview of our application architecture.








We abstracted the Smart Card Layer to mock device interactions for the higher layers and kept some of the manual (or semi-manual) tests for the lower layers. This divided our tests into two categories: unit tests and integration tests. Unit tests use the mocked Smart Card Layer, while integration tests require real smart cards to be inserted into readers during test execution.



Designing the Smart Card Layer




To enhance the testability of our code, we wanted to be able to mock the parts of it that require a smart card. To be mockable, the Smart Card Layer needed a contract with the upper layer, so we put in place an abstract base class that exposes functional methods and at the same time locks down how results and potential errors are to be reported. Any class inheriting from the base class is responsible for the device interaction.



Here is an extract of the base class:



 
protected abstract void getSerialNumber();

public final void getSerialNumber(AsyncServiceCallback<GetSerialNumberResult> callback) {
    ...
    getSerialNumber();
    ...
}

protected void onGetSerialNumberFailure(ServiceError error) {
    getSerialNumberCallback.onFailure(new GetSerialNumberResult(error));
}

protected void onGetSerialNumberSuccess(String serialNumber) {
    getSerialNumberCallback.onSuccess(new GetSerialNumberResult(serialNumber));
}



This contract allowed us to write an implementation for every type and variation of smart card we needed to support, with the added bonus of allowing us to add a mock Smart Card Layer implementation. The mock smart card implementation doesn't actually perform any smart card operations. Instead, in the setup code of a unit test we configure the expected input and return values for the test case.



Here is an extract of the mocked class extending the abstract class shown above.



 
public MockConfiguration getSerialNumberConfig = new MockConfiguration();

@Override
protected void getSerialNumber() {
    getSerialNumberConfig.execute(new RunMe() {
        @Override
        public void run() {
            if (getSerialNumberConfig.isSuccess()) {
                onGetSerialNumberSuccess(getSerialNumberConfig.returnValue);
            } else {
                onGetSerialNumberFailure(getSerialNumberConfig.serviceError);
            }
        }
    });
}


Unit tests




A unit test sets up the underlying mock classes for the test scenario and then makes the call to the subject of the test. These tests allowed us to validate our business logic in different use cases independent of a smart card. In effect we were able to make sure that our application would fail gracefully if the smart card behaved in an unexpected manner.




We used the code coverage tool EMMA to confirm that every anticipated (and unanticipated) failure path was exercised and handled properly.



Here is how we configure the mocked class:



 
ConfigurableMscmMock mscmMock = new ConfigurableMscmMock();
mscmMock.getSerialNumberConfig.returnValue = "112211221122112211221122";
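
A unit test built on that configuration might then look roughly like the following. The callback and result accessor names are assumptions based on the extracts above, not the exact DAS test code:

public void testGetSerialNumberSuccess() {
    // Arrange: configure the mock to succeed with a known value.
    ConfigurableMscmMock mscmMock = new ConfigurableMscmMock();
    mscmMock.getSerialNumberConfig.returnValue = "112211221122112211221122";

    // Act: call through the contract defined by the abstract base class.
    final Wrapper<String> observed = new Wrapper<String>();
    mscmMock.getSerialNumber(new AsyncServiceCallback<GetSerialNumberResult>() {
        public void onSuccess(GetSerialNumberResult result) {
            observed.setValue(result.getSerialNumber());
        }
        public void onFailure(GetSerialNumberResult error) {
            fail("Did not expect the mocked call to fail");
        }
    });

    // Assert: the mocked value came back through the callback.
    assertEquals("112211221122112211221122", observed.getValue());
}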


Using the above techniques, we exceeded our expectations of automated test coverage.



Conclusion




Overall, we are impressed and excited about what is possible with GWT. We were able to rebuild a complex browser-based application and deliver a consistent product for both the offline and hosted solutions with improved user experience and testing methodology.




The new DAS is one of the first smart card applications built with GWT at Gemalto. This project has proved to be a great learning experience for the team. We hope the foundation we built here will be a spring board for many more GWT applications to come.



Thank you for reading our article - for more information about DAS, please visit http://www.gemalto.com.

URL: http://googlewebtoolkit.blogspot.com/2010/06/todays-guest-blog-post-comes-from.html

[Gd] Automatically Generate Maps and Directions with Google Apps Script

Google Apps Developer Blog: Automatically Generate Maps and Directions with Google Apps Script

Following up on our recent post about the new Doc List capability in Google Apps Script, we thought we’d take a moment to look a little more closely at another new feature in Apps Script - integration with Google Maps. Google Maps has had an API for quite some time, but now we’ve made it very easy to generate customized maps and driving directions straight from a script.


A mail merge is often used to automate some of the drudgery involved in sending invitations to a large number of people. With the new Apps Script Maps Service you can easily add a map image to the email, and even add a marker showing the event location.


While that’s nice, it’s hardly a time saver - why write code to generate the same image repeatedly? A much more useful feature is generating a custom map for each guest, along with personalized driving directions. We’ve made a spreadsheet template that includes just such a script - it’s available here.


Let’s look at a short code snippet that illustrates the calls required to generate a map image and add a marker at the start and end addresses:


function getMap(start, end) {
  // Generate personalized static map.
  var directions = Maps.newDirectionFinder()
      .setOrigin(start)
      .setDestination(end)
      .getDirections();

  var map = Maps.newStaticMap().setSize(500, 350);
  map.setMarkerStyle(Maps.StaticMap.MarkerSize.MID, "red", null);
  map.addMarker(start);
  map.addMarker(end);

  return map.getMapUrl();
}


Running the function with start and end set to Google, San Francisco and Google, Mountain View, we get a link to the following image:





Along with generating map images, the Maps feature can also find directions, retrieve elevation data and even perform some geocoding operations. Please note that any data returned by these APIs should not be used without displaying an associated map image - see the Google Maps API Terms of Service for more details.


Posted by Evin Levey, Google Apps Script Product Manager

URL: http://googleappsdeveloper.blogspot.com/2010/06/automatically-generate-maps-and.html

[Gd] Page Speed for ads and trackers

Google Code Blog: Page Speed for ads and trackers

At Google, we're passionate about making the web faster. To help web page owners optimize their pages for speed, we open-sourced the Page Speed web performance tool a year ago. Today, we're excited to launch a new Page Speed feature: Page Speed for ads, such as display and rich media ads, and trackers, also known as analytics.

Page Speed now enables developers to run a performance analysis of the ads, the trackers, or the remaining content of the page. Web developers can use Page Speed to determine how ads and trackers impact the performance of their web pages, and ad and tracker providers can use this feature to tune their services for speed.

For instance, when analyzing an example web page, Page Speed displays several suggestions that we can apply to make the page faster:


But which of these suggestions applies to the content on the page that we authored? Which apply to the ads and trackers? Using the "Analyze" menu, we can determine that, in this example, the ads are contributing to slowing down the page:


When we switch to analyzing only the content of the page, the score for the page improves to 93. In this case, we can enable compression for the resource that is currently served uncompressed.


We hope you'll try these and the other new features and rules in Page Speed and find them useful for further optimizing the speed of your web pages.

Please share your experience using this new feature in our discussion forum.

By Bryan McQuade, Page Speed team
URL: http://googlecode.blogspot.com/2010/06/page-speed-for-ads-and-trackers.html

[Gd] Datastore Outage and Unapplied Writes Issue

Google App Engine Blog: Datastore Outage and Unapplied Writes Issue

We have completed the post-mortem on the Datastore outage App Engine experienced two weeks ago on May 25th. We greatly appreciate your patience in this.



A very small percentage of apps (approximately 2%) have been affected by unapplied writes as a result of the outage. “Unapplied writes” are writes that did not get replicated to the secondary Datastore before the failover process completed. We want to stress that these unapplied writes do not impact transactional consistency and have not corrupted application data. Instead, you can think of them as causing the mirror image between the primary and secondary Datastore to be out of sync.



We have emailed administrators of all affected applications to let them know that they should take action. If you do not receive an email, there is no action for you to take. If your application does have unapplied writes, all of the data has been recovered and reinserted into your application’s Datastore as separately labeled entities. We have developed new tools in the Datastore to re-integrate unapplied writes and we have also provided a support email address appengine-unapplied-writes@google.com to help you work one-on-one with the App Engine team as well. For more information on unapplied writes, please see the Unapplied Writes FAQ.



Although this is unrelated to the ongoing Datastore latency issues, we continue to work diligently on a solution for those as well. We expect to have an update early next week on our progress which will be posted on this blog, so please stay tuned.



Posted by the App Engine Team
URL: http://googleappengine.blogspot.com/2010/06/datastore-outage-and-unapplied-writes.html

[Gd] AdWords Downtime: June 12, 10am-2pm PDT

AdWords API Blog: AdWords Downtime: June 12, 10am-2pm PDT

We'll be performing routine system maintenance on Saturday, June 12 from approximately 10:00am to 2:00pm PDT. You won't be able to access AdWords or the API during this time frame, but your ads will continue to run as normal.

Best,
- Eric Koleda, AdWords API Team
URL: http://adwordsapi.blogspot.com/2010/06/adwords-downtime-june-12-10am-2pm-pdt.html

Wednesday, June 9, 2010

[Gd] Making Sense of Multitouch

Android Developers Blog: Making Sense of Multitouch


[This post is by Adam Powell, one of our more touchy-feely Android engineers. — Tim Bray]

The word “multitouch” gets thrown around quite a bit and it’s not always clear what people are referring to. For some it’s about hardware capability, for others it refers to specific gesture support in software. Whatever you decide to call it, today we’re going to look at how to make your apps and views behave nicely with multiple fingers on the screen.

This post is going to be heavy on code examples. It will cover creating a custom View that responds to touch events and allows the user to manipulate an object drawn within it. To get the most out of the examples you should be familiar with setting up an Activity and the basics of the Android UI system. Full project source will be linked at the end.

We’ll begin with a new View class that draws an object (our application icon) at a given position:

public class TouchExampleView extends View {
    private Drawable mIcon;
    private float mPosX;
    private float mPosY;

    private float mLastTouchX;
    private float mLastTouchY;

    public TouchExampleView(Context context) {
        this(context, null, 0);
    }

    public TouchExampleView(Context context, AttributeSet attrs) {
        this(context, attrs, 0);
    }

    public TouchExampleView(Context context, AttributeSet attrs, int defStyle) {
        super(context, attrs, defStyle);
        mIcon = context.getResources().getDrawable(R.drawable.icon);
        mIcon.setBounds(0, 0, mIcon.getIntrinsicWidth(), mIcon.getIntrinsicHeight());
    }

    @Override
    public void onDraw(Canvas canvas) {
        super.onDraw(canvas);

        canvas.save();
        canvas.translate(mPosX, mPosY);
        mIcon.draw(canvas);
        canvas.restore();
    }

    @Override
    public boolean onTouchEvent(MotionEvent ev) {
        // More to come here later...
        return true;
    }
}

MotionEvent

The Android framework’s primary point of access for touch data is the android.view.MotionEvent class. Passed to your views through the onTouchEvent and onInterceptTouchEvent methods, MotionEvent contains data about “pointers,” or active touch points on the device’s screen. Through a MotionEvent you can obtain X/Y coordinates as well as size and pressure for each pointer. MotionEvent.getAction() returns a value describing what kind of motion event occurred.

One of the more common uses of touch input is letting the user drag an object around the screen. We can accomplish this in our View class from above by implementing onTouchEvent as follows:

@Override
public boolean onTouchEvent(MotionEvent ev) {
    final int action = ev.getAction();
    switch (action) {
    case MotionEvent.ACTION_DOWN: {
        final float x = ev.getX();
        final float y = ev.getY();

        // Remember where we started
        mLastTouchX = x;
        mLastTouchY = y;
        break;
    }

    case MotionEvent.ACTION_MOVE: {
        final float x = ev.getX();
        final float y = ev.getY();

        // Calculate the distance moved
        final float dx = x - mLastTouchX;
        final float dy = y - mLastTouchY;

        // Move the object
        mPosX += dx;
        mPosY += dy;

        // Remember this touch position for the next move event
        mLastTouchX = x;
        mLastTouchY = y;

        // Invalidate to request a redraw
        invalidate();
        break;
    }
    }

    return true;
}

The code above has a bug on devices that support multiple pointers. While dragging the image around the screen, place a second finger on the touchscreen then lift the first finger. The image jumps! What’s happening? We’re calculating the distance to move the object based on the last known position of the default pointer. When the first finger is lifted, the second finger becomes the default pointer and we have a large delta between pointer positions which our code dutifully applies to the object’s location.

If all you want is info about a single pointer’s location, the methods MotionEvent.getX() and MotionEvent.getY() are all you need. MotionEvent was extended in Android 2.0 (Eclair) to report data about multiple pointers and new actions were added to describe multitouch events. MotionEvent.getPointerCount() returns the number of active pointers. getX and getY now accept an index to specify which pointer’s data to retrieve.
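
For example, here is a quick way to inspect every active pointer in a single event, a small debugging snippet (not part of the sample project) you might drop into onTouchEvent:

// Log the position of each active pointer in this MotionEvent.
final int pointerCount = ev.getPointerCount();
for (int i = 0; i < pointerCount; i++) {
    Log.d("TouchExample", "pointer index=" + i
            + " id=" + ev.getPointerId(i)
            + " x=" + ev.getX(i)
            + " y=" + ev.getY(i));
}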

Index vs. ID

At a higher level, touchscreen data from a snapshot in time may not be immediately useful since touch gestures involve motion over time spanning many motion events. A pointer index does not necessarily match up across complex events, it only indicates the data’s position within the MotionEvent. However this is not work that your app has to do itself. Each pointer also has an ID mapping that stays persistent across touch events. You can retrieve this ID for each pointer using MotionEvent.getPointerId(index) and find an index for a pointer ID using MotionEvent.findPointerIndex(id).

Feeling Better?

Let’s fix the example above by taking pointer IDs into account.

private static final int INVALID_POINTER_ID = -1;

// The ‘active pointer’ is the one currently moving our object.
private int mActivePointerId = INVALID_POINTER_ID;

// Existing code ...

@Override
public boolean onTouchEvent(MotionEvent ev) {
    final int action = ev.getAction();
    switch (action & MotionEvent.ACTION_MASK) {
    case MotionEvent.ACTION_DOWN: {
        final float x = ev.getX();
        final float y = ev.getY();

        mLastTouchX = x;
        mLastTouchY = y;

        // Save the ID of this pointer
        mActivePointerId = ev.getPointerId(0);
        break;
    }

    case MotionEvent.ACTION_MOVE: {
        // Find the index of the active pointer and fetch its position
        final int pointerIndex = ev.findPointerIndex(mActivePointerId);
        final float x = ev.getX(pointerIndex);
        final float y = ev.getY(pointerIndex);

        final float dx = x - mLastTouchX;
        final float dy = y - mLastTouchY;

        mPosX += dx;
        mPosY += dy;

        mLastTouchX = x;
        mLastTouchY = y;

        invalidate();
        break;
    }

    case MotionEvent.ACTION_UP: {
        mActivePointerId = INVALID_POINTER_ID;
        break;
    }

    case MotionEvent.ACTION_CANCEL: {
        mActivePointerId = INVALID_POINTER_ID;
        break;
    }

    case MotionEvent.ACTION_POINTER_UP: {
        // Extract the index of the pointer that left the touch sensor
        final int pointerIndex = (action & MotionEvent.ACTION_POINTER_INDEX_MASK)
                >> MotionEvent.ACTION_POINTER_INDEX_SHIFT;
        final int pointerId = ev.getPointerId(pointerIndex);
        if (pointerId == mActivePointerId) {
            // This was our active pointer going up. Choose a new
            // active pointer and adjust accordingly.
            final int newPointerIndex = pointerIndex == 0 ? 1 : 0;
            mLastTouchX = ev.getX(newPointerIndex);
            mLastTouchY = ev.getY(newPointerIndex);
            mActivePointerId = ev.getPointerId(newPointerIndex);
        }
        break;
    }
    }

    return true;
}

There are a few new elements at work here. We’re switching on action & MotionEvent.ACTION_MASK now rather than just action itself, and we’re using a new MotionEvent action constant, MotionEvent.ACTION_POINTER_UP. ACTION_POINTER_DOWN and ACTION_POINTER_UP are fired whenever a secondary pointer goes down or up. If there is already a pointer on the screen and a new one goes down, you will receive ACTION_POINTER_DOWN instead of ACTION_DOWN. If a pointer goes up but there is still at least one touching the screen, you will receive ACTION_POINTER_UP instead of ACTION_UP.

The ACTION_POINTER_DOWN and ACTION_POINTER_UP events encode extra information in the action value. ANDing it with MotionEvent.ACTION_MASK gives us the action constant while ANDing it with ACTION_POINTER_INDEX_MASK gives us the index of the pointer that went up or down. In the ACTION_POINTER_UP case our example extracts this index and ensures that our active pointer ID is not referring to a pointer that is no longer touching the screen. If it was, we select a different pointer to be active and save its current X and Y position. Since this saved position is used in the ACTION_MOVE case to calculate the distance to move the onscreen object, we will always calculate the distance to move using data from the correct pointer.

This is all the data that you need to process any sort of gesture your app may require. However dealing with this low-level data can be cumbersome when working with more complex gestures. Enter GestureDetectors.

GestureDetectors

Since apps can have vastly different needs, Android does not spend time cooking touch data into higher level events unless you specifically request it. GestureDetectors are small filter objects that consume MotionEvents and dispatch higher level gesture events to listeners specified during their construction. The Android framework provides two GestureDetectors out of the box, but you should also feel free to use them as examples for implementing your own if needed. GestureDetectors are a pattern, not a prepackaged solution. They’re not just for complex gestures such as drawing a star while standing on your head; they can even make simple gestures like fling or double tap easier to work with.

android.view.GestureDetector generates gesture events for several common single-pointer gestures used by Android including scrolling, flinging, and long press. For Android 2.2 (Froyo) we’ve also added android.view.ScaleGestureDetector for processing the most commonly requested two-finger gesture: pinch zooming.

Gesture detectors follow the pattern of providing a method public boolean onTouchEvent(MotionEvent). This method, like its namesake in android.view.View, returns true if it handles the event and false if it does not. In the context of a gesture detector, a return value of true implies that there is an appropriate gesture currently in progress. GestureDetector and ScaleGestureDetector can be used together when you want a view to recognize multiple gestures.

To report detected gesture events, gesture detectors use listener objects passed to their constructors. ScaleGestureDetector uses ScaleGestureDetector.OnScaleGestureListener. ScaleGestureDetector.SimpleOnScaleGestureListener is offered as a helper class that you can extend if you don’t care about all of the reported events.

Since we are already supporting dragging in our example, let’s add support for scaling. The updated example code is shown below:

private ScaleGestureDetector mScaleDetector;
private float mScaleFactor = 1.f;

// Existing code ...

public TouchExampleView(Context context, AttributeSet attrs, int defStyle) {
    super(context, attrs, defStyle);
    mIcon = context.getResources().getDrawable(R.drawable.icon);
    mIcon.setBounds(0, 0, mIcon.getIntrinsicWidth(), mIcon.getIntrinsicHeight());

    // Create our ScaleGestureDetector
    mScaleDetector = new ScaleGestureDetector(context, new ScaleListener());
}

@Override
public boolean onTouchEvent(MotionEvent ev) {
    // Let the ScaleGestureDetector inspect all events.
    mScaleDetector.onTouchEvent(ev);

    final int action = ev.getAction();
    switch (action & MotionEvent.ACTION_MASK) {
    case MotionEvent.ACTION_DOWN: {
        final float x = ev.getX();
        final float y = ev.getY();

        mLastTouchX = x;
        mLastTouchY = y;
        mActivePointerId = ev.getPointerId(0);
        break;
    }

    case MotionEvent.ACTION_MOVE: {
        final int pointerIndex = ev.findPointerIndex(mActivePointerId);
        final float x = ev.getX(pointerIndex);
        final float y = ev.getY(pointerIndex);

        // Only move if the ScaleGestureDetector isn't processing a gesture.
        if (!mScaleDetector.isInProgress()) {
            final float dx = x - mLastTouchX;
            final float dy = y - mLastTouchY;

            mPosX += dx;
            mPosY += dy;

            invalidate();
        }

        mLastTouchX = x;
        mLastTouchY = y;

        break;
    }

    case MotionEvent.ACTION_UP: {
        mActivePointerId = INVALID_POINTER_ID;
        break;
    }

    case MotionEvent.ACTION_CANCEL: {
        mActivePointerId = INVALID_POINTER_ID;
        break;
    }

    case MotionEvent.ACTION_POINTER_UP: {
        final int pointerIndex = (ev.getAction() & MotionEvent.ACTION_POINTER_INDEX_MASK)
                >> MotionEvent.ACTION_POINTER_INDEX_SHIFT;
        final int pointerId = ev.getPointerId(pointerIndex);
        if (pointerId == mActivePointerId) {
            // This was our active pointer going up. Choose a new
            // active pointer and adjust accordingly.
            final int newPointerIndex = pointerIndex == 0 ? 1 : 0;
            mLastTouchX = ev.getX(newPointerIndex);
            mLastTouchY = ev.getY(newPointerIndex);
            mActivePointerId = ev.getPointerId(newPointerIndex);
        }
        break;
    }
    }

    return true;
}

@Override
public void onDraw(Canvas canvas) {
    super.onDraw(canvas);

    canvas.save();
    canvas.translate(mPosX, mPosY);
    canvas.scale(mScaleFactor, mScaleFactor);
    mIcon.draw(canvas);
    canvas.restore();
}

private class ScaleListener extends ScaleGestureDetector.SimpleOnScaleGestureListener {
    @Override
    public boolean onScale(ScaleGestureDetector detector) {
        mScaleFactor *= detector.getScaleFactor();

        // Don't let the object get too small or too large.
        mScaleFactor = Math.max(0.1f, Math.min(mScaleFactor, 5.0f));

        invalidate();
        return true;
    }
}

This example merely scratches the surface of what ScaleGestureDetector offers. The listener methods receive a reference to the detector itself as a parameter that can be queried for extended information about the gesture in progress. See the ScaleGestureDetector API documentation for more details.

Now our example app allows a user to drag with one finger, scale with two, and it correctly handles passing active pointer focus between fingers as they contact and leave the screen. You can download the final sample project at http://code.google.com/p/android-touchexample/. It requires the Android 2.2 SDK (API level 8) to build and a 2.2 (Froyo) powered device to run.

From Example to Application

In a real app you would want to tweak the details about how zooming behaves. When zooming, users will expect content to zoom about the focal point of the gesture as reported by ScaleGestureDetector.getFocusX() and getFocusY(). The specifics of this will vary depending on how your app represents and draws its content.
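
One simple approach, sketched here rather than taken from the sample project, is to adjust the drawing offset inside onScale so that the content point under the gesture's focus stays put on screen:

@Override
public boolean onScale(ScaleGestureDetector detector) {
    final float previousScale = mScaleFactor;
    mScaleFactor *= detector.getScaleFactor();
    mScaleFactor = Math.max(0.1f, Math.min(mScaleFactor, 5.0f));

    // Keep the content point under the gesture's focus stationary on screen.
    final float scaleChange = mScaleFactor / previousScale;
    final float focusX = detector.getFocusX();
    final float focusY = detector.getFocusY();
    mPosX = focusX - (focusX - mPosX) * scaleChange;
    mPosY = focusY - (focusY - mPosY) * scaleChange;

    invalidate();
    return true;
}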

Different touchscreen hardware may have different capabilities; some panels may only support a single pointer, others may support two pointers but with position data unsuitable for complex gestures, and others may support precise positioning data for two pointers and beyond. You can query what type of touchscreen a device has at runtime using PackageManager.hasSystemFeature().
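
For example, before enabling pinch-zoom you might check the multitouch feature flag. A minimal sketch, assuming you have a Context handy and taking the exact feature flags you test for as dependent on the API level you target:

// Ask the PackageManager whether the device reports multitouch support.
PackageManager pm = context.getPackageManager();
boolean hasMultitouch =
        pm.hasSystemFeature(PackageManager.FEATURE_TOUCHSCREEN_MULTITOUCH);
if (!hasMultitouch) {
    // Fall back to single-pointer interaction, e.g. zoom buttons instead of pinch.
}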

As you design your user interface keep in mind that people use their mobile devices in many different ways and not all Android devices are created equal. Some apps might be used one-handed, making multiple-finger gestures awkward. Some users prefer using directional pads or trackballs to navigate. Well-designed gesture support can put complex functionality at your users’ fingertips, but also consider designing alternate means of accessing application functionality that can coexist with gestures.

URL: http://android-developers.blogspot.com/2010/06/making-sense-of-multitouch.html

[Gd] Help us improve the developer experience at Google Code

Google Code Blog: Help us improve the developer experience at Google Code

We'd like your feedback about how to make Google Code a more useful destination for developers to find information about using Google's APIs and developer products.

Please take our survey and give us your feedback; it should only take a few minutes.

http://code.google.com/survey

Everyone who submits the survey will have a chance to win a limited edition t-shirt.

The survey runs until midnight PST on Friday, June 11, 2010 (that's this week!).

By Jocelyn Becker and the Google Code developer experience team
URL: http://googlecode.blogspot.com/2010/06/help-us-improve-developer-experience-at.html

[Gd] Dev Channel Update

Google Chrome Releases: Dev Channel Update

The Dev channel has been updated to 6.0.427.0 for the Windows, Mac and Linux platforms.

All
  • No preview is seen in empty form fields when the Autofill profile is in focus. (Issue 38582)

Linux
  • Huge alert box crashes Chrome (truncate very long JavaScript alert messages). (Issue 45002)
  • Backtrace when dragging lock icon to page (GTK: fix site type icon dragging). (Issue 45270)

Chrome Frame
  • Fixed sad-tab showing briefly during switch to Chrome Frame. Sometimes, navigating to a page in Chrome Frame within IE would display the sad tab page briefly until the navigation was initiated from IE.

Known Issues
  • Chrome crashes when canceling the sign-in to sync (Issue 45860)
  • Chrome crashes when clicking the "save" button in the AutoFill Profiles window (Issue 46041)
  • Theme could not be installed (Issue 46036)


More details about additional changes are available in the svn log of all revisions. 



You can find out about getting on the Dev channel here: http://dev.chromium.org/getting-involved/dev-channel.

If you find new issues, please let us know by filing a bug at http://code.google.com/p/chromium/issues/entry

Karen Grunberg
Google Chrome
URL: http://googlechromereleases.blogspot.com/2010/06/dev-channel-update_09.html

Tuesday, June 8, 2010

[Gd] Save the date for Google I/O 2011

Google Code Blog: Save the date for Google I/O 2011

Google I/O just recently came to a close, but it won’t be long before we start gearing up for next year. And we’d like to make sure it’s on your calendars!

May 10-11, 2011
Moscone West, San Francisco


We’ll keep you updated when registration opens for I/O 2011 on the Google Code Blog, Twitter, and Buzz.


Posted by Christine Tsai, Google I/O Team
URL: http://googlecode.blogspot.com/2010/06/save-date-for-google-io-2011.html

[Gd] Our new search index: Caffeine

Official Google Webmaster Central Blog: Our new search index: Caffeine

(Cross-posted on the Official Google Blog)

Today, we're announcing the completion of a new web indexing system called Caffeine. Caffeine provides 50 percent fresher results for web searches than our last index, and it's the largest collection of web content we've offered. Whether it's a news story, a blog or a forum post, you can now find links to relevant content much sooner after it is published than was possible ever before.

Some background for those of you who don't build search engines for a living like us: when you search Google, you're not searching the live web. Instead you're searching Google's index of the web which, like the list in the back of a book, helps you pinpoint exactly the information you need. (Here's a good explanation of how it all works.)

So why did we build a new search indexing system? Content on the web is blossoming. It's growing not just in size and numbers but with the advent of video, images, news and real-time updates, the average webpage is richer and more complex. In addition, people's expectations for search are higher than they used to be. Searchers want to find the latest relevant content and publishers expect to be found the instant they publish.

To keep up with the evolution of the web and to meet rising user expectations, we've built Caffeine. The image below illustrates how our old indexing system worked compared to Caffeine:
Our old index had several layers, some of which were refreshed at a faster rate than others; the main layer was updated every couple of weeks. To refresh a layer of the old index, we would analyze the entire web, which meant there was a significant delay between when we found a page and made it available to you.

With Caffeine, we analyze the web in small portions and update our search index on a continuous basis, globally. As we find new pages, or new information on existing pages, we can add these straight to the index. That means you can find fresher information than ever before — no matter when or where it was published.

Caffeine lets us index web pages on an enormous scale. In fact, every second Caffeine processes hundreds of thousands of pages in parallel. If this were a pile of paper it would grow three miles taller every second. Caffeine takes up nearly 100 million gigabytes of storage in one database and adds new information at a rate of hundreds of thousands of gigabytes per day. You would need 625,000 of the largest iPods to store that much information; if these were stacked end-to-end they would go for more than 40 miles.

We've built Caffeine with the future in mind. Not only is it fresher, it's a robust foundation that makes it possible for us to build an even faster and more comprehensive search engine that scales with the growth of information online, and delivers even more relevant search results to you. So stay tuned, and look for more improvements in the months to come.

Posted by Carrie Grimes, Software Engineer
URL: http://googlewebmastercentral.blogspot.com/2010/06/our-new-search-index-caffeine.html

[Gd] YouTube API @ Google I/O 2010

YouTube API Blog: YouTube API @ Google I/O 2010

Kuan Yong, Gareth McSorley and I -- representing Product Management, Engineering, and Developer Relations, respectively -- were happy to present a YouTube API session at this year’s Google I/O developer conference. We got the chance to meet many members of the developer community there, but unfortunately not everyone is able to make it out to San Francisco in person. For the benefit of those who could not attend, a recording of our session is now available on YouTube, and embedded below.

The session was titled “YouTube API Uploads: Tips, Tricks, and Best Practices” and we covered all topics related to uploads from A (Android app uploads) to Z (zero-metadata uploads). We hit a few letters in between, too: B (browser-based uploads), I (iPhone app uploads), Q (upload quota questions), R (resumable uploads) and Y (YouTube Direct). There’s something for everyone in this session, so if your application uploads video to YouTube, be sure to check it out!



Cheers,
-Jeff Posnick, YouTube API Team
YouTube is hiring! ~ http://google.com/jobs/workyoutube
URL: http://apiblog.youtube.com/2010/06/youtube-api-google-io-2010.html