Friday, December 2, 2011

[Gd] Fridaygram: indoors, in space, in formation


The official Google Code blog: Fridaygram: indoors, in space, in formation

By Scott Knaster, Google Code Blog Editor

The latest version of Google Maps for Android can take you somewhere new: indoors. The Google Maps folks have plotted out a bunch of airports, shopping centers, stores, and other locations in the U.S. and Japan. Now you have one fewer excuse for staying at home.



From the shops to space: last Saturday NASA launched the Mars Science Laboratory (MSL), carrying the Curiosity rover. MSL will travel to Mars over the next 8 months, with touchdown planned for next August 6th. Once there, Curiosity will run experiments to determine whether the landing area could ever have supported life.

Finally, if you haven’t decided yet what you’re going to do this weekend, maybe you could fly in formation with jets like this dude did. Or you could clean out the fridge.


Fridaygram posts are just for fun. They're designed for your Friday afternoon and weekend enjoyment. Each Fridaygram item must pass only one test: it has to be interesting to us nerds.
URL: http://googlecode.blogspot.com/2011/12/fridaygram-indoors-in-space-in.html

[Gd] Hacking for Humanity around the world


The official Google Code blog: Hacking for Humanity around the world

By Christiaan Adams, Google.org Crisis Response Team

Every year, coders and designers gather with experts in disaster response and international development to spend a weekend designing tools and hacking code for the public good. This weekend, December 3-4, 2011, the next Random Hacks of Kindness (RHoK) hackathons will be taking place in cities around the world, with the simple idea that technology can and should be used for good.


Led by Google, Microsoft, Yahoo!, Hewlett-Packard, NASA, and the World Bank, RHoK brings together hackers of all stripes to create open source software solutions that address issues of global interest and assist the organizations working on those issues. The fourth round of global RHoK events will be taking place in more than 30 cities on December 3-4, 2011, and you are invited and encouraged to attend.

Some of the interesting solutions that have been developed at past events include I’mOK, a mobile app that was used after the Haiti and Chile earthquakes, CHASM, a visualization tool for mapping landslide risk which is being used by the World Bank around the Caribbean, and Bushfire Connect, an online service for real-time information on fires in Australia. Hackers have also helped develop features for Person Finder, a tool created by the Google.org Crisis Response Team to help people find friends and loved ones after disasters.

We’re inviting all developers, designers, and anyone else who wants to help “hack for humanity” to attend one of the local events this weekend, December 3-4. You’ll have a chance to meet other open source developers, work with experts in disasters and international development, and contribute code to exciting projects that make a difference. Googlers will be attending several events, including those in San Francisco, New York, London, and others. We look forward to meeting you there!

And if you’re part of an organization that works in the fields of crisis response, climate change, or international development, you can submit a problem definition online, so that developers and volunteers can work on technology to address the challenge.

Visit http://www.rhok.org/ for more information and to sign up for your local event, and get set to put your hacking skills to good use.


Christiaan Adams is a developer advocate with the Google Earth Outreach Team and Google.org’s Crisis Response Team, where he helps nonprofits and disaster response organizations to use online mapping tools. When he’s not at work, he likes to go hiking or mountain biking, using Google Maps, of course.

Posted by Scott Knaster, Editor


URL: http://googlecode.blogspot.com/2011/12/hacking-for-humanity-around-world.html

[Gd] Games Coming to Android Market in Korea


Android Developers Blog: Games Coming to Android Market in Korea



[This post is by Eric Chu, Android Developer Ecosystem. —Dirk Dougherty]



In the 24 months since the first Android device became available locally, Korea has quickly become one of the top countries in Android device activations. In parallel, we’ve also seen tremendous growth in app downloads from Android Market. Korea is now the second-largest consumer of apps worldwide. Today we are adding to this momentum by bringing games to Android Market in Korea.



Starting right away, Android users in Korea can explore the many thousands of popular game titles available in Android Market and download them onto their devices. For paid games, purchasing is fast and convenient through direct carrier billing, which lets users in Korea easily charge their purchases to their monthly mobile operator bills.



If you are a game developer, now is the time to localize your game resources, app descriptions, and marketing assets to take advantage of this new opportunity. When you are ready, please visit the Android Market developer console to target your app for distribution in South Korea and set prices in Korean Won (KRW). If you don’t want to distribute to Korea right away, you can also exclude it.



With the huge popularity of games on Android and the convenience of direct carrier billing in Korea, we expect to see a jump in game purchases and downloads in the weeks ahead. For game developers worldwide, it’s “game on” in Korea!
URL: http://android-developers.blogspot.com/2011/11/games-coming-to-android-market-in-korea.html

[Gd] Beta Channel Update


Chrome Releases: Beta Channel Update

The Beta channel has been updated to 16.0.912.59 for Windows, Mac, Linux, and Chrome Frame. 

For an overview of key features in this release, check out the Google Chrome Blog. You can also take a look at the changelog to see what happened in this release since .41.

Interested in switching to the Beta or Stable channels? Find out how. If you find a new issue, please let us know by filing a bug.

Anthony Laforge
Google Chrome
URL: http://googlechromereleases.blogspot.com/2011/12/beta-channel-update.html

Thursday, December 1, 2011

[Gd] Image Results for your Custom Search Engine


Google Custom Search: Image Results for your Custom Search Engine

Since the launch of Custom Search in 2006, CSE has powered searches on a broad range of sites on the web. Until now, those CSEs have only returned text-based results, but in some cases images can be a much faster, easier, and more visually appealing way to search. For photo-focused sites, image results are a great way to showcase your beautiful photos and help visitors quickly and easily find the photos they want. We also think sites focused on news, celebrities, art, and digital production assets will similarly benefit.

Now you can add an image results tab to your CSE to offer your visitors image-only results in a variety of image-optimized presentation formats. Once you enable this feature, your CSE will have two tabs: the first contains your current web search results, and the second, the Image tab, contains the image search results. Here’s an example from India-Forums.com:


Enabling image results is easy! Just visit the Basics page of your CSE’s Control Panel and check the Enable image search checkbox. You can change the layout of your image results on the Look and feel page.


Once enabled, you’ll also be able to get separate image search reports from your CSE’s Statistics page.

This new feature is available to all users of our Custom Search Element (you will need to Get Code and update your site). Since we are transitioning all iframe users to the Element, this should be most sites. To learn more about Image Search for Custom Search, please visit our help center. Let us know what you think in our discussion forum.

Posted by: Peng Zhao, Software Engineer
URL: http://googlecustomsearch.blogspot.com/2011/12/image-results-for-your-custom-search.html

[Gd] Scaling with the Kindle Fire


Google App Engine Blog: Scaling with the Kindle Fire

Today’s blog post comes to us from Greg Bayer of Pulse, a popular news reading application for iPhone, iPad and Android devices. Pulse has used Google App Engine as a core part of their infrastructure for over a year and they recently celebrated a significant launch. We hope you find their experiences and tips on scaling useful.






As part of the much-anticipated Kindle Fire launch, Pulse was announced as one of the few preloaded apps. When you first unbox the Fire, Pulse will be there waiting for you on the home row, next to Facebook and IMDB!

Scale
The Kindle Fire is projected to sell over five million units this quarter alone. This means that those of us who work on backend infrastructure at Pulse have had to prepare for nearly doubling our user-base in a very short period. We also need to be ready for spikes in load due to press events and the holiday season.

Architecture
As I’ve discussed previously on the Pulse Engineering Blog, Pulse’s infrastructure has been designed with scalability in mind from the beginning. We’ve built our web site and client APIs on top of Google App Engine, which has allowed us to grow steadily from tens to many thousands of requests per second without needing to re-architect our systems.

While restrictive in some ways, we’ve found App Engine’s frontend serving instances (running Python in our case) to be extremely scalable, with minimal operational support from our team. We’ve also found the datastore, memcache, and task queue facilities to be equally scalable.

Pulse’s backend infrastructure provides many critical services to our native applications and web site. For example, we cache and serve optimized feed and image data for each source in our catalog. This allows us to minimize latency and data transfer and is especially important to providing an exceptional user experience on limited mobile connections. Providing this service for millions of users requires us to serve hundreds of millions of requests per day. As with any well-designed App Engine app, the vast majority of these requests are served out of memcache and never hit the datastore. Another useful technique we use is to set public cache control headers wherever possible, to allow Google’s edge cache (shown as cached requests on the graph below) and ISP / mobile carrier caches to serve unchanged content directly to users.
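To illustrate the pattern (a minimal sketch, not Pulse’s actual code), a Python App Engine handler can combine a memcache read-through with public cache-control headers. The handler path, cache key, and the placeholder feed lookup below are all hypothetical:

from google.appengine.api import memcache
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

def load_feed(source_id):
    # Hypothetical placeholder for the real datastore query that builds
    # the optimized feed payload for a source.
    return '{"source": "%s", "items": []}' % source_id

class FeedHandler(webapp.RequestHandler):
    def get(self, source_id):
        # Serve from memcache whenever possible so the request never
        # touches the datastore.
        cache_key = 'feed:' + source_id
        body = memcache.get(cache_key)
        if body is None:
            body = load_feed(source_id)
            memcache.set(cache_key, body, time=600)

        # Public cache-control headers let Google's edge cache and
        # ISP / mobile carrier caches serve unchanged responses directly.
        self.response.headers['Cache-Control'] = 'public, max-age=600'
        self.response.headers['Content-Type'] = 'application/json'
        self.response.out.write(body)

application = webapp.WSGIApplication([(r'/feeds/(.+)', FeedHandler)])

if __name__ == '__main__':
    run_wsgi_app(application)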





Costs
Based on App Engine’s projected billing statements leading up to the recent pricing changes, we were concerned that our costs might increase significantly. To prepare for these changes and the expected additional load from Kindle Fire users, we invested some time in diagnosing and reducing these costs. In most cases, the increases turned out to be an indicator of inefficiencies in our code and/or in the App Engine scheduler. With a little optimization, we have reduced these costs dramatically.

The new tuning sliders for the scheduler make it possible to rein in overly aggressive instance allocation. In the old pricing structure, idle instance time wasn’t charged at all, so these inefficiencies were usually ignored. Now App Engine charges for all instance time by default. However, any time App Engine runs more idle instances than you’ve allowed, those hours are free. This acts as a hint to the scheduler, helping it reduce unneeded idle instances. By testing to find the optimal balance between cost and spike-latency tolerance, and setting the sliders to those levels, we were able to reduce our frontend instance costs to near original levels. Our heavy usage of memcache (which is still free!) also helps keep our instance hours down.





Since datastore operations used to be charged under the umbrella of CPU hours, it was difficult to know the cost of these operations under the old pricing structure. This meant it was easy to miss application inefficiencies, especially for write-heavy workloads where additional indexes can have a multiplicative effect on costs. In our case, the new datastore write operations metric led us to notice some inefficiencies in our design and a tendency to overuse indexes. We are now working to minimize the number of indexes our queries rely on, and this has started to reduce our write costs.
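As an illustration (a sketch using the Python db API rather than Pulse’s actual models; the kind and property names are made up), skipping indexes on properties that are never filtered or sorted on reduces the write operations charged for each put():

from google.appengine.ext import db

class CachedFeed(db.Model):
    # Properties that queries filter or sort on need indexes; every
    # indexed property adds index write operations to each put().
    source_id = db.StringProperty()
    fetched = db.DateTimeProperty(auto_now=True)

    # Large payloads that are never queried can skip indexing, so a
    # put() only pays for the entity write itself.
    payload = db.TextProperty()               # TextProperty is never indexed
    etag = db.StringProperty(indexed=False)   # explicitly opt out of indexing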

Preparing for the Kindle Fire Launch
We took a few additional steps to prepare for the expected load increase and spikes associated with the Fire’s launch. First, we contacted App Engine’s support team to warn them of the expected increase. This is recommended for any app at or near 10,000 requests per second (to make sure your application is correctly provisioned). We also signed up for a Premier account which gets us additional support and simpler billing.

Architecturally, we decided to split our load across three primary applications, each serving different use cases. While this makes it harder to access data across these applications, those same boundaries serve to isolate potential load-related problems and make tuning simpler. In our case, we were able to divide certain parts of our infrastructure, where cross application data access was less important and load would be significant. Until App Engine provides more visibility into and control of memcache eviction policies, this approach also helps prevent lower priority data from evicting critical data.

I’m hopeful that in the near future such division of services will not be required. Individually tunable load isolation zones and memcache controls would certainly make it a lot more appealing to have everything in a single application. Until then, this technique works quite well, and helps to simplify how we think about scaling.

To learn more about Pulse, check out our website! If you have comments or questions about this post or just want to reach out directly, you can find me @gregbayer.
URL: http://googleappengine.blogspot.com/2011/11/scaling-with-kindle-fire.html

Wednesday, November 30, 2011

[Gd] JavaScript Client Library for Google APIs Alpha version released


The official Google Code blog: JavaScript Client Library for Google APIs Alpha version released

By Brendan O’Brien and Antonio Fuentes, Google Developer Team

Today we reached another milestone in our efforts to provide infrastructure and tools to make it easier for developers to use Google APIs: we have released the Google APIs Client Library for JavaScript in Alpha. This client library is the latest addition to our suite of client libraries, which already includes Python, PHP, and Java.

This compact and efficient client library provides access to all the Google APIs that are listed in the APIs Explorer. The client library is also flexible, supporting multiple browser environments including Chrome 8+, Firefox 3.5+, Internet Explorer 8+, Safari 4+, and Opera 11+. In addition, the JavaScript client library supports OAuth 2.0 authorization methods.

You can load the client library using the following script tag:

<script src="https://apis.google.com/js/client.js?onload=CALLBACK"></script>

Loading an API and making a request is as easy as executing:

gapi.client.load('API_NAME', 'API_VERSION', CALLBACK);

// Returns a request object which can then be executed.
// METHOD_NAME is only available once CALLBACK runs.

var request = gapi.client.METHOD_NAME(PARAMETERS_OBJECT);
request.execute(callback);

You can use the APIs Explorer to check all the methods available for an API, as well as the parameters for each method. For instance, use the above syntax with the plus.activities.search method of the Google+ API to query activities:


<!DOCTYPE html>
<html>
  <head>
  </head>

  <body>
    <script type="text/javascript">
      function init() {
        // Load your API key from the Developer Console
        gapi.client.setApiKey('YOUR_API_KEY');

        // Load the API
        gapi.client.load('plus', 'v1', function() {
          var request = gapi.client.plus.activities.search({
            'query': 'Google+',
            'orderby': 'best'
          });

          request.execute(function(resp) {
            // Output title
            var heading = document.createElement('h4');
            heading.appendChild(document.createTextNode(resp.title));
            var content = document.getElementById('content');
            content.appendChild(heading);

            // Output content of the response
            if (!resp.items) {
              content.appendChild(document.createTextNode('No results found.'));
            } else {
              for (var i = 0; i < resp.items.length; i++) {
                var entry = document.createElement('p');
                entry.appendChild(document.createTextNode(resp.items[i].title));
                content.appendChild(entry);
              }
            }
          });
        });
      }
    </script>
    <script src="https://apis.google.com/js/client.js?onload=init"></script>

    <div id="content"></div>
  </body>
</html>

To try this yourself, sign up in the Google APIs console or refer to the documentation on acquiring and using a developer key in the Google+ API.

The Google APIs Client Library for JavaScript is currently in Alpha, which means that we are actively developing it but wanted to get the library into your hands as soon as possible, and we welcome any feedback to make the code better. While you can use the current library to start writing code, you should use caution when writing production code, as library changes may break your application. We are working hard to upgrade this release to Beta and beyond soon, and to release even more client libraries.

To get started, visit the JavaScript Client Library documentation page. We also welcome your feedback, which you can provide using the JavaScript client group.


Brendan O'Brien is a Software Engineer for the Browser Client group at Google. Prior to working on JavaScript APIs he was a frontend engineer for iGoogle. He is passionate about JavaScript and enjoys building web applications.

Antonio Fuentes is a Product Manager for the Google API Infrastructure group. He has experience launching products in the cloud computing, infrastructure, and virtualization space.

Posted by Scott Knaster, Editor
URL: http://googlecode.blogspot.com/2011/11/javascript-client-library-for-google.html

[Gd] Introducing Au-to-do, a sample application built on Google APIs


The official Google Code blog: Introducing Au-to-do, a sample application built on Google APIs

By Dan Holevoet, Developer Relations Team

A platform is more than the sum of its component parts. You can read about it or hear about it, but to really learn what makes up a platform you have to try it out for yourself, play with the parts, and discover what you can build.

With that in mind, we started a project called Au-to-do: a full sample application implementing a ticket tracker, built using Google APIs, that developers can download and dissect.

Au-to-do screen shot

Au-to-do currently uses a number of Google APIs and technologies, and additional integrations are on their way. We are also planning a series of follow-up blog posts discussing each of the integrations in depth, with details on our design decisions and best practices you can use in your own projects.

By the way, if you’re wondering how to pronounce Au-to-do, you can say "auto-do" or "ought-to-do" — either is correct.

Ready to take a look at the code? Check out the getting started guide. Found a bug? Have a great idea for a feature or API integration? Let us know by filing a request.

Happy hacking!


Dan Holevoet joined the Google Developer Relations team in 2007. When not playing Starcraft, he works on Google Apps, with a focus on the Calendar and Contacts APIs. He's previously worked on iGoogle, OpenSocial, Gmail contextual gadgets, and the Google Apps Marketplace.

Posted by Scott Knaster, Editor



URL: http://googlecode.blogspot.com/2011/11/introducing-au-to-do-sample-application.html

Tuesday, November 29, 2011

[Gd] OpenGL ES 2.0 Certification for ANGLE


Chromium Blog: OpenGL ES 2.0 Certification for ANGLE

In March of last year we introduced ANGLE as the engine that would power Chrome's GPU rendering on Windows. At the time it was announced, ANGLE only supported a subset of the OpenGL ES 2.0 API. Thanks to continued work from TransGaming, in collaboration with Google engineers and other contributors, ANGLE has reached an important milestone: it now passes the rigorous OpenGL ES 2.0 test suite and has been certified as a compliant GL ES 2.0 implementation. This is a major step forward for the project, and a major event for OpenGL ES support on Windows.

Mac and Linux already enjoy solid OpenGL support, but on Windows OpenGL drivers are not sufficiently widespread to be relied upon. Using ANGLE allows us to issue OpenGL ES commands in Chrome's graphics systems and not worry about the user's computer having OpenGL drivers -- ANGLE translates these commands into Direct3D 9 API calls.

ANGLE helps Chrome use a single, open graphics standard and remain portable across platforms. Because it's a standalone, open-source library, ANGLE can help other software projects in the same way. Firefox, for instance, is already using ANGLE to render WebGL content on Windows.

ANGLE is a necessary step in our continued efforts to push the web platform forward. Without ANGLE, it would be impossible to reliably run WebGL on many Windows computers, so we couldn't enable great applications like MapsGL. We hope WebGL developers and implementors will continue to join us in making ANGLE, and the open web platform, successful.

Posted by Vangelis Kokkevis, Software Engineer
URL: http://blog.chromium.org/2011/11/opengl-es-20-certification-for-angle.html

Monday, November 28, 2011

[Gd] Simplifying Access Control in Google Cloud Storage


The official Google Code blog: Simplifying Access Control in Google Cloud Storage

By Navneet Joneja, Product Manager

Google Cloud Storage is a robust, high-performance service that enables developers and businesses to use Google’s infrastructure to power their data. Today, we’re announcing a new feature that makes it even easier to control and share your data.

Per-Bucket Default Object ACLs

Customers building a wide variety of applications have asked us for an easier mechanism to control the permissions granted on newly created objects. Now you can define your access control policy for a bucket once by specifying a Default Object ACL for any bucket, and we’ll automatically apply that ACL to any object without an explicitly defined ACL. You can always override the default by providing a canned ACL when you upload the object or by updating the object’s ACL afterwards. This mechanism simplifies a wide variety of use cases, including data sharing, controlled-access data sets, and corporate drop-boxes.

New buckets without Default ACLs

After analyzing how customers use our service, we’ve also decided to make a few small changes to the behavior of buckets that have no explicit default object ACL. Effective today, new buckets are created with an implied project-private default object ACL. In other words, project editors and owners will have FULL_CONTROL access to new objects, and project viewers will have READ access to them. This change better aligns the default behavior with how our customers use storage. You can change a bucket’s default object ACL at any time after creating the bucket.

Existing buckets have an effective default object ACL of "private", and they will continue to work as they always have until and unless you specify a new default object ACL for them.


Navneet Joneja loves being at the forefront of the next generation of simple and reliable software infrastructure, the foundation on which next-generation technology is being built. When not working, he can usually be found dreaming up new ways to entertain his intensely curious one-year-old.

Posted by Scott Knaster, Editor
URL: http://googlecode.blogspot.com/2011/11/simplifying-access-control-in-google.html

[Gd] Google I/O 2012 extended to three days from June 27-29, 2012


The official Google Code blog: Google I/O 2012 extended to three days from June 27-29, 2012

By Monica Tran, Google I/O Team

After Google I/O 2011, you consistently told us you wanted more time to attend sessions, visit our partners in the Developer Sandbox, and meet 1:1 with the engineers behind Google’s developer platforms and APIs. We recently received an unexpected opportunity to extend Google I/O to three days, so as we announced on our +Google Developers page, we are moving the conference to June 27-29, 2012. It will still take place at Moscone Center West in San Francisco.

Google I/O 2012
June 27-29, 2012
Moscone Center West, San Francisco


In the meantime, be sure to brush up on your coding skills. They’ll come in handy when the new application process opens in February. That’s all we can tell you for now, but we’d advise against making travel arrangements until then. Continue following us at our Google Developers page on Google+ to be the first to get #io12 updates!


This post supersedes our previous Save the Date announcement. Please update your calendars: Google I/O will be coming to Moscone Center in San Francisco on June 27-29. We will be responding to FAQs via our thread on Google+.


You might remember Monica Tran from I/O Live or one of our eight Google Developer Days around the world. This year, she’s back to lead the charge on Google I/O 2012.

Posted by Scott Knaster, Editor

URL: http://googlecode.blogspot.com/2011/11/google-io-2012-extended-to-three-days.html

Sunday, November 27, 2011

[Gd] Parsing exported mailboxes using Python


Google Apps Developer Blog: Parsing exported mailboxes using Python

Google Apps domain administrators can use the Email Audit API to download mailbox accounts for audit purposes in accordance with the Customer Agreement. To improve the security of the data retrieved, the service creates a PGP-encrypted copy of the mailbox which can only be decrypted by providing the corresponding RSA key.
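If you are scripting the process, one way to decrypt the export is to shell out to GnuPG from Python. This is a sketch that assumes gpg is installed and the matching private key has already been imported into the local keyring; the file names are illustrative:

import subprocess

# Decrypt the downloaded export; gpg uses the imported private key and
# prompts for its passphrase if one is set.
subprocess.check_call([
    'gpg', '--output', 'export.mbox',
    '--decrypt', 'export.mbox.gpg',
])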

When decrypted, the exported mailbox will be in mbox format, a standard file format used to represent collections of email messages. The mbox format is supported by many email clients, including Mozilla Thunderbird and Eudora.

If you don’t want to install a specific email client to check the content of exported mailboxes, or if you are interested in automating this process and integrating it with your business logic, you can also programmatically access mbox files.

You could fairly easily write a parser for the simple, text-based mbox format. However, some programming languages have native mbox support or libraries which provide a higher-level interface. For example, Python has a module called mailbox that exposes such functionality, and parsing a mailbox with it only takes a few lines of code:

import mailbox

def print_payload(message):
    # if the message is multipart, its payload is a list of messages
    if message.is_multipart():
        for part in message.get_payload():
            print_payload(part)
    else:
        print message.get_payload(decode=True)

mbox = mailbox.mbox('export.mbox')
for message in mbox:
    print message['subject']
    print_payload(message)

Let me know your favorite way to parse mbox-formatted files by commenting on Google+.

For any questions related to the Email Audit API, please get in touch with us on the Google Apps Domain Info and Management APIs forum.

Claudio Cherubino

Claudio is a Developer Programs Engineer working on Google Apps APIs and the Google Apps Marketplace. Prior to Google, he worked as a software developer, technology evangelist, community manager, consultant, and technical translator, and has contributed to many open-source projects, including MySQL, PHP, Wordpress, Songbird, and Project Voldemort.

URL: http://googleappsdeveloper.blogspot.com/2011/11/parsing-exported-mailboxes-using-python.html