Saturday, June 15, 2013

[Gd] Play Cube Slam, a real-time WebRTC video game


Google Developers Blog: Play Cube Slam, a real-time WebRTC video game

By Justin Uberti, Chromium team

Cross-posted with the Chromium Blog

Cube Slam is a Chrome Experiment built with WebRTC, an open web technology that lets you communicate in real-time in the browser (and in this case, play an old-school arcade game with your friends) without downloading and installing any plug-ins. In this post, we wanted to explain a bit about how Cube Slam works.

Cube Slam uses getUserMedia to access your webcam and microphone (with your permission, of course), RTCPeerConnection to stream your video to a friend, and RTCDataChannel to transfer the bits that keep the gameplay in sync. If you and your friend are behind firewalls, RTCPeerConnection uses a TURN relay server (hosted on Google Compute Engine) to make the connection. When there are no firewalls in the way, however, the entire game happens directly peer-to-peer, reducing latency for players and server costs for developers.
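
To make that concrete, here is a rough sketch of the connection setup in JavaScript. The TURN server address and credentials are placeholders, vendor prefixes are omitted, and the signaling exchange (which Cube Slam handles with its own server) is elided:

// Grab the local webcam and microphone; the browser asks the user first.
navigator.getUserMedia({audio: true, video: true}, function(stream) {
  // A TURN relay is used only when a direct path is blocked by firewalls.
  var pc = new RTCPeerConnection({
    iceServers: [{
      url: 'turn:turn.example.com',  // placeholder relay address
      username: 'user',
      credential: 'secret'
    }]
  });
  pc.addStream(stream);  // our outgoing audio/video
  pc.onaddstream = function(event) {
    // Render the friend's incoming stream in a <video> element.
    document.querySelector('video').src = URL.createObjectURL(event.stream);
  };
  // From here, offer/answer SDP and ICE candidates are exchanged over a
  // separate signaling channel to complete the peer-to-peer connection.
}, function(error) {
  console.error('getUserMedia failed:', error);
});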


Cube Slam "Game Over" screen

Cube Slam is the first large-scale application to use RTCDataChannel, which provides an API similar to WebSocket, but sends the data over the RTCPeerConnection peer-to-peer link. RTCDataChannel sends data securely, and supports an "unreliable" mode for cases where you want high performance but don't care about every single packet making it across the network. In cases like games where low delay often matters more than perfect delivery, this ensures that a single stray packet doesn't slow down the whole app.
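
As a sketch of how a game might use this, assuming an established RTCPeerConnection pc like the one above (Chrome at the time exposed this as a boolean reliable option; the option names below follow the evolving spec):

// An unordered, no-retransmit channel: a lost packet is simply superseded
// by the next state update instead of stalling everything behind it.
var channel = pc.createDataChannel('gamestate', {
  ordered: false,
  maxRetransmits: 0
});
channel.onmessage = function(event) {
  applyRemoteState(JSON.parse(event.data));  // hypothetical game hook
};
setInterval(function() {
  if (channel.readyState === 'open') {
    channel.send(JSON.stringify(localPaddleState()));  // hypothetical
  }
}, 33);  // roughly 30 updates per second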

RTCDataChannel only supports unreliable mode in desktop Chrome today. We're working on implementing the latest WebRTC spec, where we'll use the standard SCTP protocol to support reliable mode. WebRTC will also be available on Chrome for Android later this year, and you can try it now by flipping “Enable WebRTC Android” in chrome://flags. Several browsers are currently working on implementing WebRTC, and we’re looking forward to the day when you can have a Cube Slam face-off against your friends on any browser and any device.

To learn more about the tech in Cube Slam, you can check out our technology page and source code. Disable the shields! Destroy the screen! Have fun!

Justin Uberti is one of the co-creators of the WebRTC initiative, and leads the WebRTC engineering team at Google. Previously, Justin helped create Google+ Hangouts.

Posted by Ashleigh Rentz, Editor Emerita

URL: http://googledevelopers.blogspot.com/2013/06/play-cube-slam-real-time-webrtc-video.html

[Gd] Race across screens and platforms, powered by the mobile web


Google Developers Blog: Race across screens and platforms, powered by the mobile web

By Pete LePage, Chromium team

Cross-posted with the Chromium Blog

You may have seen our recent demo of Racer at Google I/O, and wondered how it was made. So today we wanted to share some of the web technologies that made this Chrome Experiment “street-legal” in a couple of months. Racer was built to show what’s possible on today’s mobile devices using an entirely in-browser experience. The goal was to create a touch-enabled experience that plays out across multiple screens (and speakers). Because it was built for the web, it doesn’t matter if you have a phone or a tablet running Android or iOS, everyone can jump right in and play.
   
Racer required two things: speedy pings and a consistent experience across screens. We delivered our minimal 2D vector drawings to each device’s HTML5 Canvas using the Paper.js vector library. Paper.js can handle the path math for our custom race track shapes without getting lapped. To eke out all the frame rate possible on such a large array of devices we rendered the cars as image sprites on a DOM layer above the Canvas, while positioning and rotating them using CSS3’s transform: matrix().
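
A rough sketch of that sprite trick (element and numbers are illustrative; a -webkit- prefix may be needed on some of these browsers):

// Move and rotate a car sprite on the DOM layer without repainting canvas.
function placeCar(el, x, y, angle) {
  var cos = Math.cos(angle), sin = Math.sin(angle);
  // matrix(a, b, c, d, tx, ty): rotation and translation in one property.
  el.style.transform = 'matrix(' + cos + ',' + sin + ',' + (-sin) + ',' +
      cos + ',' + x + ',' + y + ')';
}

placeCar(document.getElementById('car'), 120, 80, Math.PI / 6);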

Racer’s sound experience is shared across multiple devices using the Web Audio API (available in latest iOS and Android M28 beta). Each device plays one slice of Giorgio Moroder’s symphony of sound—requiring five devices at once to hear his full composition. A constant ping from the server helps synchronize all device speakers allowing them to bump to one solid beat. Not only is the soundtrack divided across devices, it also reacts to each driver’s movements in real time—the accelerating, coasting, careening, and colliding rebalances the presence of every instrument.
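
A simplified sketch of how a device might schedule its slice against a server-agreed start time, assuming the ping handshake has already produced a clock offset in seconds:

var context = new AudioContext();  // prefixed as webkitAudioContext in 2013

// Start this device's audio slice at the moment all devices agreed on.
function playSlice(buffer, serverStartTime, clockOffset) {
  var source = context.createBufferSource();
  source.buffer = buffer;
  source.connect(context.destination);
  // Translate the shared server timestamp onto this device's audio clock.
  var localStart = serverStartTime - clockOffset;
  source.start(localStart);  // sample-accurate scheduling
}

Older WebKit builds spelled start() as noteOn(), so a real implementation would feature-detect between the two.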

To sync your phones or tablets, we use WebSockets, which enables rapid two-way communication between devices via a central server. WebSocket technology is just the start of what multi-device apps of the future might use. We’re looking forward to when WebRTC data channels—the next generation of speedy Web communication—is supported in the stable channel of Chrome for mobile. Then we’ll be able to deliver experiences like Racer with even lower ping times and without bouncing messages via a central server. Racer’s backend was built on the Google Cloud Platform using the same structure and web tools as another recent Chrome Experiment, Roll It.
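
In its simplest form, that sync channel looks something like this (the endpoint and message shapes are hypothetical):

var socket = new WebSocket('wss://racer.example.com/sync');

socket.onmessage = function(event) {
  var msg = JSON.parse(event.data);
  if (msg.type === 'ping') {
    // Answer pings so the server can measure each device's latency.
    socket.send(JSON.stringify({type: 'pong', id: msg.id}));
  } else if (msg.type === 'state') {
    updateTrack(msg.cars);  // hypothetical render hook
  }
};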

To get an even more detailed peek under the hood, we just published two new case studies on our HTML5 Rocks site. Our friends at Plan8 wrote one about the sound engineering and Active Theory wrote one about the front-end build. You can also join the team at Active Theory for a Google Developers Live event this Thursday, June 13, 2013 at 3pm GMT for an in-depth discussion.

Pete LePage is a Developer Advocate on the Google Chrome team and helps developers create great web applications and mobile web experiences.

Posted by Ashleigh Rentz, Editor Emerita
URL: http://googledevelopers.blogspot.com/2013/06/race-across-screens-and-platforms.html

[Gd] Google BigQuery new features: bigger, faster, smarter


Google Developers Blog: Google BigQuery new features: bigger, faster, smarter

By Felipe Hoffa, Cloud Platform team

Google BigQuery is designed to make it easy to analyze large amounts of data quickly. Today we announced several updates that give BigQuery the ability to handle arbitrarily large result sets, use window functions for advanced analytics, and cache query results. You are also getting new UI features, larger interactive quotas, and a new convenient tiered pricing scheme. In this post we'll dig further into the technical details of these new features.

Large results

BigQuery is able to process terabytes of data, but until today BigQuery could only output up to 128 MB of compressed data per query. Many of you asked for more, and from now on BigQuery will be able to output results as large as the largest tables our customers have ever had.

To get this benefit, enable the new "--allow_large_results" flag when issuing a query job, and specify a destination table. All results will be saved to the specified table (or appended, if the table exists). In the updated web UI these options can be found under the new "Enable Options" menu.
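
For example, a large-results query from the bq command-line tool might look like this (the dataset and table names are placeholders):

% bq query --allow_large_results \
    --destination_table=mydataset.all_results \
    "SELECT * FROM [mydataset.huge_table]"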

With this feature, you can run big transformations on your tables, plus get big subsets of data to further analyze from the new table.

Analytic functions

BigQuery's power is in the ability to interactively run aggregate queries over terabytes of data, but sometimes counts and averages are not enough. That's why BigQuery also lets you calculate quantiles, variance and standard deviation, as well as other advanced functions.

To make BigQuery even more powerful, today we are adding support for window functions (also known as "analytical functions") for ranking, percentiles, and relative row navigation. These new functions give you different ways to rank results, explore distributions and percentiles, and traverse results without the need for a self join.

To introduce these functions with an advanced example, let's use the dataset we collected from the Data Sensing Lab at Google I/O. With the percentile_cont() function it's easy to get the median temperature over each room:


SELECT percentile_cont(0.5) OVER (PARTITION BY room ORDER BY data) AS median, room
FROM [io_sensor_data.moscone_io13]
WHERE sensortype='temperature'

In this example, each original data row shows the median temperature for each room. To visualize it better, it's a good idea to group all results by room with an outer query:


SELECT MAX(median) AS median, room FROM (
  SELECT percentile_cont(0.5) OVER (PARTITION BY room ORDER BY data) AS median, room
  FROM [io_sensor_data.moscone_io13]
  WHERE sensortype='temperature'
)
GROUP BY room

We can add an additional outer query, to rank the rooms according to which one had the coldest median temperature. We'll use one of the new ranking window functions, dense_rank():


SELECT DENSE_RANK() OVER (ORDER BY median) rank, median, room FROM (
  SELECT MAX(median) AS median, room FROM (
    SELECT percentile_cont(0.5) OVER (PARTITION BY room ORDER BY data) AS median, room
    FROM [io_sensor_data.moscone_io13]
    WHERE sensortype='temperature'
  )
  GROUP BY room
)

We've updated the documentation with descriptions and examples for each of the new window functions. Note that they require the OVER() clause, with an optional PARTITION BY and sometimes required ORDER BY arguments. ORDER BY tells the window function what criteria to use to rank items, while PARTITION BY allows you to define multiple groups to be analyzed independently of each other.

The window functions don't work with the big GROUP EACH BY and JOIN EACH operators, but they do work with the traditional GROUP BY and JOIN. As a reminder, we announced GROUP EACH BY and JOIN EACH last March, to allow large join and group operations.

Query caching

BigQuery now remembers values that you've previously computed, saving you time and the cost of recalculating the query. To maintain privacy, queries are cached on a per-user basis. Cached results are only returned for tables that haven't changed since the last query, or for queries that are not dependent on non-deterministic parameters (such as the current time). Reading cached results is free, but each query still counts against the max number of queries per day quota. Query results are kept cached for 24 hours, on a best effort basis. You can disable query caching with the new flag --use_cache in bq, or "useQueryCache" in the API. This feature is also accessible with the new query options on the BigQuery Web UI.

BigQuery Web UI: Query validator, cost estimator, and abandonment

The BigQuery UI gets even better: You'll get instant information while writing a query if its syntax is valid. If the syntax is not valid, you'll know where the error is. If the syntax is valid, the UI will inform you how much the query would cost to run. This feature is also available with the bq tool and API, using the --dry_run flag.

An additional improvement: when running queries in the UI, you previously had to wait for a query to complete before starting another one. Now you have the option to abandon a running query and start working on the next iteration without waiting for the abandoned one.

Pricing updates

Starting in July, BigQuery pricing becomes more affordable for everyone: Data storage costs are going from $0.12/GB/month to $0.08/GB/month. And if you are a high-volume user, you'll soon be able to opt-in for tiered query pricing, for even better value.

Bigger quota

To support larger workloads we're doubling interactive query quotas for all users, from 200GB + 1 concurrent query, to 400 GB of concurrent queries + 2 additional queries of unlimited size.

These updates make BigQuery a faster, smarter, and even more affordable solution for ad hoc analysis of extremely large datasets. We expect they'll help to scale your projects, and we hope you'll share your use cases with us on Google+.


The BigQuery UI features a collection of public datasets for you to use when trying out these new features. To get started, visit our sign-up page and Quick Start guide. You should take a look at our API docs, and ask questions about BigQuery development on Stack Overflow. Finally, don't forget to give us feedback and join the discussion on our Cloud Platform Developers Google+ page.



Felipe Hoffa has recently joined the Cloud Platform team. He'd love to see the world's data accessible for everyone in BigQuery.

Posted by Ashleigh Rentz, Editor Emerita
URL: http://googledevelopers.blogspot.com/2013/06/google-bigquery-new-features-bigger.html

[Gd] Optimal Logging


Google Testing Blog: Optimal Logging

by Anthony Vallone

How long does it take to find the root cause of a failure in your system? Five minutes? Five days? If you answered close to five minutes, it’s very likely that your production system and tests have great logging. All too often, seemingly unessential features like logging, exception handling, and (dare I say it) testing are an implementation afterthought. Like exception handling and testing, you really need to have a strategy for logging in both your systems and your tests. Never underestimate the power of logging. With optimal logging, you can even eliminate the necessity for debuggers. Below are some guidelines that have been useful to me over the years.


Channeling Goldilocks

Never log too much. Massive, disk-quota burning logs are a clear indicator that little thought was put into logging. If you log too much, you’ll need to devise complex approaches to minimize disk access, maintain log history, archive large quantities of data, and query these large sets of data. More importantly, you’ll make it very difficult to find valuable information in all the chatter.

The only thing worse than logging too much is logging too little. There are normally two main goals of logging: help with bug investigation and event confirmation. If your log can’t explain the cause of a bug or whether a certain transaction took place, you are logging too little.

Good things to log:
  • Important startup configuration
  • Errors
  • Warnings
  • Changes to persistent data
  • Requests and responses between major system components
  • Significant state changes
  • User interactions
  • Calls with a known risk of failure
  • Waits on conditions that could take measurable time to satisfy
  • Periodic progress during long-running tasks
  • Significant branch points of logic and conditions that led to the branch
  • Summaries of processing steps or events from high level functions - Avoid logging every step of a complex process in low-level functions.

Bad things to log:
  • Function entry - Don’t log a function entry unless it is significant or logged at the debug level.
  • Data within a loop - Avoid logging from many iterations of a loop. It is OK to log from iterations of small loops or to log periodically from large loops.
  • Content of large messages or files - Truncate or summarize the data in some way that will be useful to debugging.
  • Benign errors - Errors that are not really errors can confuse the log reader. This sometimes happens when exception handling is part of successful execution flow.
  • Repetitive errors - Do not repetitively log the same or similar error. This can quickly fill a log and hide the actual cause. Frequency of error types is best handled by monitoring. Logs only need to capture detail for some of those errors.


There is More Than One Level

Don't log everything at the same log level. Most logging libraries offer several log levels, and you can enable certain levels at system startup. This provides a convenient control for log verbosity.

The classic levels are:
  • Debug - verbose and only useful while developing and/or debugging.
  • Info - the most popular level.
  • Warning - strange or unexpected states that are acceptable.
  • Error - something went wrong, but the process can recover.
  • Critical - the process cannot recover, and it will shut down or restart.

Practically speaking, only two log configurations are needed:
  • Production - Every level is enabled except debug. If something goes wrong in production, the logs should reveal the cause.
  • Development & Debug - While developing new code or trying to reproduce a production issue, enable all levels.
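
As a bare-bones illustration of level gating (a sketch only; real systems should use an established logging library):

var LEVELS = {debug: 0, info: 1, warning: 2, error: 3, critical: 4};
var threshold = LEVELS.info;  // production config: everything except debug

function log(level, message) {
  if (LEVELS[level] >= threshold) {
    console.log(new Date().toISOString() +
        ' [' + level.toUpperCase() + '] ' + message);
  }
}

log('debug', 'cache key computed');           // suppressed in production
log('error', 'payment service call failed');  // always recorded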


Test Logs Are Important Too

Log quality is equally important in test and production code. When a test fails, the log should clearly show whether the failure was a problem with the test or production system. If it doesn't, then test logging is broken.

Test logs should always contain:
  • Test execution environment
  • Initial state
  • Setup steps
  • Test case steps
  • Interactions with the system
  • Expected results
  • Actual results
  • Teardown steps


Conditional Verbosity With Temporary Log Queues

When errors occur, the log should contain a lot of detail. Unfortunately, detail that led to an error is often unavailable once the error is encountered. Also, if you’ve followed advice about not logging too much, your log records prior to the error record may not provide adequate detail. A good way to solve this problem is to create temporary, in-memory log queues. Throughout processing of a transaction, append verbose details about each step to the queue. If the transaction completes successfully, discard the queue and log a summary. If an error is encountered, log the content of the entire queue and the error. This technique is especially useful for test logging of system interactions.
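
A minimal sketch of the pattern, reusing the log() helper from above (validate() and persist() are hypothetical processing steps):

function processTransaction(txn) {
  var queue = [];  // verbose detail, kept in memory only
  function trace(msg) {
    queue.push(new Date().toISOString() + ' ' + msg);
  }
  try {
    trace('validating input for txn ' + txn.id);
    validate(txn);
    trace('persisting record');
    persist(txn);
    log('info', 'txn ' + txn.id + ' succeeded');  // summary only
    // Success: the queue is simply discarded.
  } catch (err) {
    // Failure: dump every buffered step along with the error.
    log('error', 'txn ' + txn.id + ' failed: ' + err + '\n' +
        queue.join('\n'));
    throw err;
  }
}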


Failures and Flakiness Are Opportunities

When production problems occur, you’ll obviously be focused on finding and correcting the problem, but you should also think about the logs. If you have a hard time determining the cause of an error, it's a great opportunity to improve your logging. Before fixing the problem, fix your logging so that the logs clearly show the cause. If this problem ever happens again, it’ll be much easier to identify.

If you cannot reproduce the problem, or you have a flaky test, enhance the logs so that the problem can be tracked down when it happens again.

Use failures as opportunities to improve logging throughout the development process. While writing new code, try to refrain from using debuggers and rely on the logs alone. Do the logs describe what is going on? If not, the logging is insufficient.


Might As Well Log Performance Data

Logged timing data can help debug performance issues. For example, it can be very difficult to determine the cause of a timeout in a large system unless you can trace the time spent on every significant processing step. This can be easily accomplished by logging the start and finish times of calls that can take measurable time (a minimal helper is sketched after this list):
  • Significant system calls
  • Network requests
  • CPU intensive operations
  • Connected device interactions
  • Transactions
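
A minimal timing helper, again reusing the log() sketch from earlier (queryUsers() is hypothetical):

// Log start and finish times around any call that may take measurable time.
function timed(name, fn) {
  var start = Date.now();
  log('info', name + ' started');
  var result = fn();
  log('info', name + ' finished in ' + (Date.now() - start) + ' ms');
  return result;
}

var rows = timed('user db query', function() { return queryUsers(); });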


Following the Trail Through Many Threads and Processes

You should create unique identifiers for transactions that involve processing across many threads and/or processes. The initiator of the transaction should create the ID, and it should be passed to every component that performs work for the transaction. This ID should be logged by each component when logging information about the transaction. This makes it much easier to trace a specific transaction when many transactions are being processed concurrently.
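
A sketch of the idea, building on the same log() helper (the ID format is arbitrary; anything unique enough for your log volume works):

// The transaction initiator mints one ID for the whole operation...
var txnId = Date.now().toString(36) + '-' +
    Math.random().toString(36).slice(2, 8);

// ...and every component that touches the transaction tags its lines with
// it. In a real system the ID travels with the work item, so components in
// other threads or processes can log it too.
function componentLog(component, message) {
  log('info', '[txn=' + txnId + '] [' + component + '] ' + message);
}

componentLog('frontend', 'request received');
componentLog('thumbnailer', 'resize complete');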


Monitoring and Logging Complement Each Other

A production service should have both logging and monitoring. Monitoring provides a real-time statistical summary of the system state. It can alert you if a percentage of certain request types is failing, if the system is experiencing unusual traffic patterns, if performance is degrading, or if other anomalies occur. In some cases, this information alone will clue you in to the cause of a problem. However, in most cases, a monitoring alert is simply a trigger for you to start an investigation. Monitoring shows the symptoms of problems, while logs provide details and state on individual transactions, so you can fully understand the cause of problems.

URL: http://googletesting.blogspot.com/2013/06/optimal-logging.html

[Gd] From a need to a startup, thanks to a Google hackathon


Google Apps Developer Blog: From a need to a startup, thanks to a Google hackathon

Editor’s Note: Guest author Arnaud Breton is a co-founder of UniShared — Nicolas Garnier

A few weeks ago, Clément and I started to take online courses (based on videos), mainly on Coursera, edX (the MIT-Harvard venture), and Udacity, the biggest online course providers.

Right from the start, we found that it was really painful to take digital notes while watching the video lectures: we had to switch between multiple windows, struggle to interact with them (no shortcuts to play/pause, etc.), and had no way of matching the notes to the videos. Finally we ended up taking notes with old-fashioned pen and paper.

After giving it some thought, we were convinced that we could make this much better. We both live in Mountain View, fifteen minutes away from the Google HQ, where a Google Drive hackathon was taking place, so we decided to take this opportunity to find and build a solution to our problem. We came up with the idea of a simple Google Drive app that would embed both the video lectures and the notes in the same window, letting us leverage all the shortcuts. Additionally, it would enable us to synchronize the notes with the videos, so we could smoothly jump to the related part of a video while studying our notes. During the two days - and two nights - we developed that missing app, helped by the amazing Googlers from the Drive team, and met other amazing people who came to give birth to their own ideas or to help us.

For a tech guy like me, building this app was an exciting new challenge, especially in such a short time. Fortunately, leveraging Google's robust and powerful infrastructure and frameworks made it possible. It seemed obvious to us to leverage the Google Drive and YouTube APIs, to host our app on Google App Engine, and to build it with the amazing AngularJS.

At the end of the hackathon, we managed to get a viable app, enabling students to create, save and synchronize their notes and videos. We were even rewarded by the Drive team who gave us a prize for our efforts!

VideoNot.es was born!


A few hours later, we started to spread the word about our new app, convinced that it could be as useful for other students as it was for us. Immediately, hundreds of students started to use it, giving us useful feedback on what mattered most to them. Based on their feedback, we continued to improve the app as the number of users grew. Today, one month after the hackathon, more than 300 VideoNotes are created daily all around the world, and the number continues to grow day after day!


So, what’s next for us?
Obviously, continuing to improve the app based on feedback is our top priority.

Also, we have discovered new use cases where VideoNot.es can be useful. For example, journalists contacted us to tell us that they use it to write articles based on interview recordings, and people in charge of transcribing videos are also very excited about VideoNot.es.


It made us more convinced than ever that note-taking can be drastically improved, especially by adapting it to its learning context. We want to go even further. Bringing social interactions back to online classrooms by letting students easily share their notes, again leveraging Google Drive, was the first step. Adding real-time collaboration and Q&A features is the second step, and we will definitely take advantage of the Google Drive Realtime API for this.

Thanks a lot to the Drive team, who organized this hackathon and gave our idea a try through their amazing tools! This is just the beginning of an adventure that started from a problem we faced, found its solution at the Google hackathon, and is just starting to reach its potential.


Arnaud Breton is passionate about the impact that technologies have on the world.  He is the co-founder and CTO of UniShared / VideoNot.es.

URL: http://googleappsdeveloper.blogspot.com/2013/06/from-need-to-startup-thanks-to-google.html

[Gd] Changes in rankings of smartphone search results


Official Google Webmaster Central Blog: Changes in rankings of smartphone search results

Webmaster level: Intermediate

Smartphone users are a significant and fast growing segment of Internet users, and at Google we want them to experience the full richness of the web. As part of our efforts to improve the mobile web, we published our recommendations and the most common configuration mistakes.

Avoiding these mistakes helps your smartphone users engage with your site fully and helps searchers find what they're looking for faster. To improve the search experience for smartphone users and address their pain points, we plan to roll out several ranking changes in the near future that address sites that are misconfigured for smartphone users.

Let's now look at two of the most common mistakes and how to fix them.

Faulty redirects

Some websites use separate URLs to serve desktop and smartphone users. A faulty redirect is when a desktop page redirects smartphone users to an irrelevant page on the smartphone-optimized website. A typical example is when all pages on the desktop site redirect smartphone users to the homepage of the smartphone-optimized site. For example, in the figure below, the redirects shown as red arrows are considered faulty:

This kind of redirect disrupts a user's workflow and may lead them to stop using the site and go elsewhere. Even if the user doesn't abandon the site, irrelevant redirects add more work for them to handle, which is particularly troublesome when they're on slow mobile networks. These faulty redirects frustrate users whether they're looking for a webpage, video, or something else, and our ranking changes will affect many types of searches.

Avoiding irrelevant redirects is very easy: Simply redirect smartphone users from a desktop page to its equivalent smartphone-optimized page. If the content doesn't exist in a smartphone-friendly format, showing the desktop content is better than redirecting to an irrelevant page.
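
As a rough illustration only (not Google-provided code), hypothetical server-side logic for a site with a separate m.example.com might look like this in Node.js:

var http = require('http');

http.createServer(function(req, res) {
  var ua = req.headers['user-agent'] || '';
  // Googlebot-Mobile for smartphones identifies itself as an iPhone,
  // so a normal smartphone check covers it as well.
  var isSmartphone = /iPhone|Android.+Mobile/.test(ua);
  var mobileUrl = lookupMobileEquivalent(req.url);  // hypothetical mapping
  if (isSmartphone && mobileUrl) {
    // Redirect to the *equivalent* page, never to the mobile homepage.
    res.writeHead(302, {'Location': 'http://m.example.com' + mobileUrl});
    res.end();
  } else {
    // No smartphone-friendly equivalent: serving the desktop page beats
    // redirecting to an irrelevant page or showing an error.
    serveDesktopPage(req, res);  // hypothetical
  }
}).listen(8080);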

We have more tips about redirects, and be sure to read our recommendations for having separate URLs for desktop and smartphone users.

Smartphone-only errors

Some sites serve content to desktop users accessing a URL but show an error page to smartphone users instead. There are many scenarios where smartphone-only errors are seen. Some common ones, along with how to address them, are:

  • If you recognize a user is visiting a desktop page from a mobile device and you have an equivalent smartphone-friendly page at a different URL, redirect them to that URL instead of serving a 404 or a soft 404 page.

  • Make sure that the smartphone-friendly page itself is not an error page. If your content is not available in a smartphone-friendly format, serve the desktop page instead. Showing the content the user was looking for is a much better experience than showing an error page.

  • Incorrectly handling Googlebot-Mobile. A typical mistake is when Googlebot-Mobile for smartphones is incorrectly redirected to the website optimized for feature phones, which, in turn, redirects Googlebot-Mobile for smartphones back to the desktop site. This results in an infinite redirect loop, which we recognize as an error.

    Avoiding this mistake is easy: all Googlebot-Mobile user-agents identify themselves as specific mobile devices, and you should treat these Googlebot user-agents exactly as you would treat those devices. For example, Googlebot-Mobile for smartphones currently identifies itself as an iPhone, so you should serve it the same response an iPhone user would get.

  • Unplayable videos on smartphone devices. Many websites embed videos in a way that works well on desktops but is unplayable on smartphone devices. For example, if content requires Adobe Flash, it won't be playable on an iPhone or on Android versions 4.1 and higher.

Although we covered only two types of mistakes here, it's important for webmasters to focus on avoiding all of the common smartphone website misconfigurations. Try to test your site on as many different mobile devices and operating systems, or their emulators, as possible, including testing the videos included on your site. Doing so will improve the mobile web, make your users happy, and allow searchers to experience your content fully.

As always, please ask in our forums if you have any questions.

By Yoshikiyo Kato, Software Engineer, on behalf of the Mobile Search team

URL: http://googlewebmastercentral.blogspot.com/2013/06/changes-in-rankings-of-smartphone_11.html

Friday, June 14, 2013

[Gd] Retiring Chrome Frame


Chromium Blog: Retiring Chrome Frame

The main goal of the Chromium project has always been to help unlock the potential of the open web.  We work closely with the industry to standardize, implement and evangelize web technologies that help enable completely new types of experiences, and push the leading edge of the web platform forward.

But in 2009, many people were using browsers that lagged behind the leading edge. In order to reach the broadest base of users, developers often had to either build multiple versions of their applications or not use the new capabilities at all. We created Chrome Frame — a secure plug-in that brings a modern engine to old versions of Internet Explorer — to allow developers to bring better experiences to more users, even those who were unable to move to a more capable browser.

Today, most people are using modern browsers that support the majority of the latest web technologies. Better yet, the usage of legacy browsers is declining significantly and newer browsers stay up to date automatically, which means the leading edge has become mainstream.

Given these factors we’ve decided to retire Chrome Frame, and will cease support and updates for the product in January 2014. If you are a developer with an app that points users to Chrome Frame, please prompt visitors to upgrade to a modern browser. You can learn more about these changes in our FAQ.

If you’re an IT administrator you can give your employees the full capabilities of a modern browser today, even if you depend on older technology to run certain web apps. Check out Chrome for Business coupled with Legacy Browser Support, which allows employees to switch seamlessly between Chrome and another browser. Chrome is secure, stable and speedy, and runs on all major desktop and mobile OSs. IT admins can also configure 100+ policies to make Chrome fit their needs.

It’s unusual to build something and hope it eventually makes itself obsolete, but in this case we see the retirement of Chrome Frame as evidence of just how far the web has come.

Posted by Robert Shield, Google Chrome Engineer
URL: http://blog.chromium.org/2013/06/retiring-chrome-frame.html

Thursday, June 13, 2013

[Gd] Chrome Beta for Android Update


Chrome Releases: Chrome Beta for Android Update

The Chrome for Android Beta channel has been updated to 28.0.1500.45. This release has a number of crash fixes as well as the following fixes:
  • 244018: <video> non-functional on Transformer TF101 / Acer Iconia / Motorola Xoom
  • 245349: Grey patch/bar displayed while scrolling m.nytimes.com
  • 196702: Searching from google.com stays @google.com page instead of search results URL
  • 239912: Align error page styles
  • 241372: Page goes blank upon changing the device orientation in FIP mode
Known issues:
  • 247030: 'Starting download...' toast displayed too soon when downloading a PDF with Flywheel enabled
  • 243602: Page jumps up and down when loaded in landscape mode
  • 239685: White flash when creating NTP
A partial list of changes in this build is available in the SVN revision log. If you find a new issue, please let us know by filing a bug. More information about Chrome for Android is available on the Chrome site.

Jason Kersey
Google Chrome
URL: http://googlechromereleases.blogspot.com/2013/06/chrome-beta-for-android-update_12.html

[Gd] Beta Channel Update


Chrome Releases: Beta Channel Update

The Beta channel has been updated to 28.0.1500.44 for Windows, Mac, and Chrome Frame, and 28.0.1500.45 for Linux.  Full details about what changes are in this build are available in the SVN revision log.

For more information about features coming to Chrome, check out the Chrome Blog.

Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug.

Anthony Laforge
Google Chrome
URL: http://googlechromereleases.blogspot.com/2013/06/beta-channel-update_12.html

[Gd] Dev Channel Update


Chrome Releases: Dev Channel Update

The Dev Channel has been updated to 29.0.1535.3 for Windows, Linux and Chrome Frame [Update: 29.0.1535.4 for Mac also]. This release fixes a number of crashes, as well as other bugs. A full list of changes is available in the SVN log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug.

Jason Kersey
Google Chrome
URL: http://googlechromereleases.blogspot.com/2013/06/dev-channel-update_11.html

[Gd] Flash Player Update for Stable Channel


Chrome Releases: Flash Player Update for Stable Channel

We are currently updating Flash Player to 11.7.700.225 for Windows and Mac to all Stable channel (Chrome 27) users.

If you find a new issue, please let us know by filing a bug.

Karen Grunberg
Google Chrome
URL: http://googlechromereleases.blogspot.com/2013/06/flash-player-update-for-stable-channel.html

[Gd] Google App Engine 1.8.1 Released


Cloud Platform Blog: Google App Engine 1.8.1 Released

Hot on the heels of this year’s Google I/O, Google App Engine 1.8.1 is now released. Below are some of the significant changes that are part of 1.8.1.



Search API

In 1.8.1 we’re moving the Search API to Preview. The Search API allows your application to perform Google-like searches over structured data. You can search across several different types of data (plain text, HTML, atom, numbers, dates, and geographic locations).



This Preview release is one step closer to General Availability (GA). Between now and then we will only be making slight modifications to the API and adding an SLA. Finally, as part of the Preview, we will begin charging for operations and storage. Pricing details can be found here. Note that prices may change between now and GA.



Source Push-to-Deploy

App Engine now supports deployment of Python and PHP applications via the Git tool. Once you complete the initial setup steps, you will be ready to deploy apps with the same ease with which you push code to a git repository:

% git push appengine master

If you’re interested in test-driving this feature, you can get started here.



Google Cloud Storage Client Library

To improve access to Google Cloud Storage from App Engine, we’re now announcing the Preview release of the Cloud Storage Client Library. This client library contains much of the functionality available in the Files API (Python | Java), but provides stronger integrity guarantees and a better overall developer experience. The Cloud Storage Files API (currently Experimental) and the Cloud Storage Client Library overlap significantly, so we plan to decommission the Files API in a future release. The Cloud Storage Client Library will be upgraded to GA in an upcoming App Engine release, so we strongly encourage developers to start making the move over the next few months.



Task Queues

Many developers have requested an API for enqueuing tasks asynchronously, and this is now available in 1.8.1. Developers can quickly add tasks to any Task Queue without blocking, allowing your applications to process requests more efficiently.



Datastore

There are 2 significant Google Cloud Datastore changes in this release. First, we have improved performance by changing the Datastore default auto ID policy to use scattered IDs. This change will take effect for all versions of your app uploaded with the 1.8.1 SDK. Second, Python developers will be pleased to learn that the NDB library now supports ‘DISTINCT’ queries.



We covered a lot of new content at Google I/O last month, so check out the full session videos to see the latest across Google Cloud Platform.



The complete list of features and bug fixes for 1.8.1 can be found in our release notes. For App Engine coding questions and answers check us out on Stack Overflow, and for general discussion and feedback, find us on our Google Group.



-Posted by Chris Ramsdale, Product Manager
URL: http://googlecloudplatform.blogspot.com/2013/06/google-app-engine-181-released.html

[Gd] Cube Slam meets Google Cloud Platform


Cloud Platform Blog: Cube Slam meets Google Cloud Platform

The Google Creative Lab team has built another fun Chrome Experiment called Cube Slam. Cube Slam connects players into a three dimensional, virtual gaming arena, complete with real-time audio, video, and data feeds.



Cube Slam demonstrates some of the coolest new web technologies, all in one application:


  • WebRTC - WebRTC, a free, open project that provides Real-Time Communications (RTC) capabilities via simple JavaScript APIs, is built right into Chrome and other modern browsers (no plug-in required!). Cube Slam uses WebRTC to transmit a live audio and video feed from your opponent's webcam, so you can see and hear your opponent react as the game unfolds.

  • WebGL - Using a combination of WebGL and the three.js JavaScript library, Cube Slam provides smooth, responsive, and fast rendering of 3D graphics.

  • Google Cloud Platform - The game is hosted by a scalable, high performance back end written in Go and served by Google App Engine. A STUN/TURN server running on Google Compute Engine is used to exchange data across and between firewalls.




Cube Slam is an open source project, available via this public repository. You can read more about the technology behind Cube Slam.



Here’s a screenshot of one of our engineers playing Cube Slam live:







We hope this Chrome experiment inspires you to build your own applications harnessing the Google Cloud Platform and the latest web technologies built right into Chrome!



- Posted by Marc Cohen, Developer Programs Engineer
URL: http://googlecloudplatform.blogspot.com/2013/06/cube-slam-meets-google-cloud-platform.html


[Gd] Building Google Apps Extensions running on Google Cloud Platform


Cloud Platform Blog: Building Google Apps Extensions running on Google Cloud Platform

Today’s post is from Alex Kennberg, VP of Engineering at Synergyse. In this post, Alex describes how their company uses Google Cloud Platform to build their training solutions for Google Apps.



Synergyse chose to focus on enhancing training for Google Apps because of the continuous innovation it brings to the enterprise and education spaces. We built Synergyse Training for Google Apps, a fully interactive, measurable and scalable training solution that has been deployed throughout organizations and educational institutions globally.



We chose Google Cloud Platform and a Google Chrome Extension as our technology stack. Google App Engine is a perfect fit for us because, as a cloud service, it spares us IT issues with our servers and gives us automatic scaling. At an early stage of development and deployment it can be especially hard to predict what the next year of usage is going to be like. App Engine allows us to focus on our product and not worry as much about the fine details of operating the backend. App Engine also seamlessly connects to other Google services.







It’s a pleasure to be able to add a new service by simply importing the right official library. In a matter of hours, I was able to drop in authentication (OpenID), a database (Google Cloud SQL), storage, and OAuth. Everything worked as expected. Since the libraries handle most of the hard work, our backend code is very lean, and most things can be done by the client-side JavaScript.



Pro Tip: Before deploying the latest code to App Engine, change the version in the "appengine-web.xml" file. Next, go to the App Engine Dashboard and select your app. Choose Versions from the menu. From here you can choose who gets which version of your backend. The default version is served to everyone, while traffic splitting lets you test new versions on a smaller set of users first. For staging, we force our extension to access a specific version of the backend by pointing it to [version].[app-id].appspot.com.



In order to overlay our user interface (player, menu, etc.) on top of Google Apps, we built a Chrome Extension. The extension is written in JavaScript, using standard browser APIs, jQuery, and Google Closure. Specifically, our templates for the HTML parts use Closure Templates, and the JavaScript is compiled with the Closure Compiler.









Google Closure Templates are well designed in that they have short-form commands and discourage me from adding complexity to the views. They also translate into readable JavaScript and work well with the compiler. I use the compiler to help catch bugs and to minify and obfuscate our code. There are compiler extern files that declare the Chrome Extension and jQuery APIs here. To watch for code changes and automatically re-compile the templates and JavaScript, I made this open source project.



Synergyse Training for Google Apps uses Google Cloud Platform and a Google Chrome Extension to deliver its training to people around the world. With Google App Engine we get security, reliability, and automatic scaling out of the box, which lets us focus on core product development. Google Chrome is a perfect vehicle for overlaying the Synergyse user interface on top of Google Apps using the latest standard web technologies, and it makes for an easy deployment process to our customers.



- Contributed by Alex Kennberg, Vice President, Synergyse





URL: http://googlecloudplatform.blogspot.com/2013/06/building-google-apps-extensions-running-on-google-cloud-platform.html


Monday, June 10, 2013

[Gd] Beta Channel Update for Chrome OS


Chrome Releases: Beta Channel Update for Chrome OS

The Beta channel has been updated to 28.0.1500.35 (Platform version: 4100.38.5) for Samsung Chromebooks. This build contains a number of bug fixes, security updates and feature enhancements.

Some highlights of these changes are:
  • New fullscreen mode - hit the fullscreen button to hide the Chrome toolbar until you hover at the top for a more immersive browsing experience.
  • Fixed choppy video issues with YouTube videos in HTML5 mode on Samsung Chromebooks (231975)
  • Fixed crashes with enabling certain extensions (233414)
  • Fixed an issue where users connecting via slower 3G connections could be unable to log in when creating a new user. (239139)
  • Fixed an issue where logging in via a captive portal access point did not show a login screen. (237214)
  • Several crash fixes
Known Issues:
  • Playing certain formats of high definition videos in the Media Player may generate an error while playing the video. (245505)
  • On Chromebook Pixel systems, a message may appear saying "Charging not reliable" when the battery is close to fully charged.

If you find new issues, please let us know by visiting our help site or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 horizontal bars in the upper right corner of the browser).

Danielle Drew
Google Chrome
URL: http://googlechromereleases.blogspot.com/2013/06/beta-channel-update-for-chrome-os_8.html

Sunday, June 9, 2013

[Gd] For shipping jewelry, Apps Script is golden


Google Apps Developer Blog: For shipping jewelry, Apps Script is golden

Editor’s Note: Guest author Jason Gordon is a co-founder of Beth Macri Designs — Arun Nagarajan

Beth Macri Designs creates jewelry from the point of view of a structural engineer. The forms are designed using generative 3D software systems and materialized using 3D printing technologies. Our company understands that to make beautiful fine jewelry, 3D printing is only the first step; traditional jewelry craft is then employed for final production. After our first product, The Hidden Message Necklace, was featured on The View as part of its Valentine's Day Gift Guide, we had a lot of orders to ship out. As soon as the mail leaves the building, though, the process is literally out of our hands: something unexpected was bound to happen to at least one or two packages. Several package-tracking services exist, but getting the names and tracking numbers into them was a cut-and-paste operation.

I knew that all of the tracking numbers were being delivered by email and I had already set up a Gmail filter to archive them and apply a label. With a little help from Google Apps Script, I knew I could automatically parse those emails and add them to my account on PackageTrackr (which syncs to their newer service, Fara).

The script supports reading emails from multiple shipping providers and is set up so one could easily add more. Every 30 minutes on a time-driven trigger, using the Gmail service, the script runs and looks through unread emails from the shipping provider label, then parses the name and tracking number out of each one. The provider, tracking number, and recipient are stored in a JavaScript array.
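
Setting up such a trigger takes only a few lines of Apps Script (the handler function name here is illustrative):

// Run processShippingEmails() every 30 minutes.
function installTrigger() {
  ScriptApp.newTrigger('processShippingEmails')
      .timeBased()
      .everyMinutes(30)
      .create();
}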


function getUSPSConversations() {
  return GmailApp.search("in:usps is:unread subject:(Click-N-Ship)");
}

function matchUSPSHTML(data) {
  var out = [];
  // Pull tracking numbers and recipient names out of the USPS email HTML.
  var track_num = data.match(
      /TrackConfirmAction\Winput\.action\WtLabels\=(\d+)/g);
  var to = data.match(/Shipped.to.*[\r\n]*.*>([a-zA-Z\s-_]*)<br>/g);
  for (var i in track_num) {
    var o = new Object();
    var track = track_num[i].match(/(\d+)/g);
    var person = to[i].match(/>([a-zA-Z\s-_]+)<br>/);
    var myPerson = person[1].replace(/(\r\n|\n|\r)/gm, "");
    o["number"] = track[0];
    o["carrier"] = "USPS";
    o["person"] = myPerson;
    out.push(o);
  }
  return out;
}

You can parse emails from all of your different shipping providers in one run of the script. After all of the shipment emails are read, the script composes an email to PackageTrackr, giving it all of the tracking numbers it just harvested.


var user = Session.getActiveUser().getEmail();
var body = "";  // the digest of tracking numbers to send
if (data.length > 0) {
  for (var d in data) {
    body += formatForPackageTrackr(data[d]["number"],
        data[d]["carrier"], data[d]["person"]);
  }
  // BCC ourselves so there is a record of what was submitted.
  GmailApp.sendEmail("track@packagetrackr.com", "Add Packages",
      body, {bcc: user});
}

function formatForPackageTrackr(tracking_num, service, person) {
  return "#:" + tracking_num + " " + service + " " + person + "\n";
}

Down the line, other shipping providers could be added, such as UPS and FedEx. Additionally, more tracking services could be supported instead of just PackageTrackr.


Jason Gordon is a co-founder at jewelry startup Beth Macri Designs. He is responsible for software development, logistics and e-commerce. While working at Beth Macri Designs, Jason gets to find creative ways to put his software development skills to work to improve logistics and user experience.


URL: http://googleappsdeveloper.blogspot.com/2013/06/for-shipping-jewelry-apps-script-is.html