Google Docs viewer on Mobile Browsers

(Cross-posted with the Google Docs Blog)

Last week, we announced that the Google Docs viewer supports .doc and .docx attachments. Today we’re also releasing a mobile version of the Google Docs viewer for Android, iPhone and iPad to help you view PDFs, .ppt, .doc and .docx files you’ve uploaded to your documents list, without needing to download the file.

With our mobile viewer you can switch quickly between pages and pan/zoom within a page. On your iPhone and iPad, you can pinch to zoom in or out.

You can try it out by going to docs.google.com on your Android-powered device, iPad or iPhone and selecting any document in one of these formats that you've previously uploaded. Let us know what you think in the Mobile Help Forum.

Exercising Our Remote Application Removal Feature

[This post is by Rich Cannings, Android Security Lead. — Tim Bray]

Every now and then, we remove applications from Android Market due to violations of our Android Market Developer Distribution Agreement or Content Policy. In cases where users may have installed a malicious application that poses a threat, we’ve also developed technologies and processes to remotely remove an installed application from devices. If an application is removed in this way, users will receive a notification on their phone.

Recently, we became aware of two free applications built by a security researcher for research purposes. These applications intentionally misrepresented their purpose in order to encourage user downloads, but they were not designed to be used maliciously, and did not have permission to access private data — or system resources beyond permission.INTERNET. As the applications were practically useless, most users uninstalled the applications shortly after downloading them.

After the researcher voluntarily removed these applications from Android Market, we decided, per the Android Market Terms of Service, to exercise our remote application removal feature on the remaining installed copies to complete the cleanup.

The remote application removal feature is one of many security controls Android possesses to help protect users from malicious applications. In case of an emergency, a dangerous application could be removed from active circulation in a rapid and scalable manner to prevent further exposure to users. While we hope to not have to use it, we know that we have the capability to take swift action on behalf of users’ safety when needed.

This remote removal functionality — along with Android’s unique Application Sandbox and Permissions model, Over-The-Air update system, centralized Market, developer registrations, user-submitted ratings, and application flagging — provides a powerful security advantage to help protect Android users in our open environment.

Android Market Problem

Earlier today we had a brief outage in Android Market. For a period of about thirty minutes, some users were unable to find any apps. The problem was detected and corrected, and we believe the user experience is now back to normal. We apologize for the outage.

The Froyo Code Drop

[This post is by Jean-Baptiste Queru, who moves truck-loads of source code in and out of the Googleplex. — Tim Bray]

Today is one of those days that has my heart racing; we’ve just released the source code for Android 2.2. This is a big step forward for the entire Android ecosystem. Please don’t melt the servers down again while trying to download that latest source code.

This blog typically talks about developing Android applications using the SDK and NDK. However, the skills of a platform contributor aren’t fundamentally different from those of an application developer. Those are simply different roles using the same skill set. I’m providing an update here to the experienced Android programmers all around the world on some of the recent developments in the Android Open-Source Project.

For Google engineers working on Android, releases are mostly known by their code names which are chosen alphabetically after tasty treats. I’ll call Android 2.2 “Froyo” throughout this post, since that was its code name. Raw version numbers don’t make me salivate as much as the thought of a cold dessert in the California summer.

Let’s have a look at some cool aspects of the new Froyo source, and let’s then take a few steps back to look at other noteworthy aspects of the Android Open-Source Project.

I had been increasingly involved in all previous open-source releases of Android, from testing the initial code drop to doing all the open-source-related git-level work in Eclair. Following that path, Froyo is the first release where my primary focus has been the Android Open-Source Project from start to finish. I thank the entire Android team for helping me all along with much of that work. Here are some aspects of Froyo that I am proud of, and that kept me busy for the last few months:

  • Hundreds of platform changes that people everywhere uploaded to the Android Open-Source Project were accepted and merged into Froyo. That process is now a well-oiled machine and will translate well to future contributions.

  • The open-source release happened in a single step. The whole source tree for the entire Android 2.2 platform is now available, with its full change history. That will accelerate everyone’s migration to Froyo from older releases. It is also already fully merged into the open-source master tree. Consequently, we can immediately review and accept platform contributions based on Froyo, which reduces the risk of merge conflicts between contributions to the open-source tree and changes in Google’s internal master tree where those contributions are meant to end up.

  • In order to make it easier for device manufacturers and custom system builders to use Froyo, we’ve restructured our source tree to better separate closed-source modules from open-source ones. We’ve made many changes to the open-source code itself to remove unintentional dependencies on closed-source software. We’ve also incorporated into the core platform all the configuration files necessary to build the source code of the Android Open-Source Project on its own. You can now build and boot a fully open-source system image out of the box, for the emulator, as well as for Dream (ADP1), Sapphire (ADP2), and Passion (Nexus One).

  • Speaking of device support, we also open-sourced several additional hardware-related libraries that had been closed-source in previous releases, which will open the door to more contributions. Some examples are the recovery UI code for Dream, Sapphire and Passion, and the interface between the media framework and Qualcomm chipsets.

Besides the Froyo source code release, I wanted to mention several other improvements in the Android Open-Source Project:

  • We’ve been receiving contributions from more than twenty different companies, and many individuals. We have close to 4,000 registered users on the Gerrit code review server, with an average of 2 contributions per user. Those contributions have been in all areas of the system, from the depth of the C library all the way to the UI of the lock screen. They’ve covered the full range of complexities, from fixing typos in the documentation or reformatting code to adding developer-visible APIs or user-visible features. I want to thank everyone who got involved for their work and patience.

  • We’re now responding to platform contributions faster, with most changes currently getting looked at within a few business days of being uploaded, and few changes staying inactive for more than a few weeks at a time. We’re trying to review early and review often. As I’m typing this, only about a dozen platform contributions haven’t been looked at yet, with the oldest of those being 3 days old. More than 90% of contributions to the platform code itself have been actively looked at during the last 2 weeks. I hope that the speedy process will lead to more interactivity during the code reviews. I realize nevertheless that time differences around the world can make real-time communication a challenge.

  • Over the last 2 months, we’ve reached a final decision on more than 1,000 changes that were uploaded to our public Gerrit server. That means that those changes were either accepted or rejected after being reviewed. The high quality of the contributions we’ve been receiving throughout the history of the Android Open-Source Project has allowed us to steadily merge about 80% of them into the main repository, from where they migrate to official releases. That means that an average of 20 changes have been accepted through the Android Open-Source Project into the public git repositories every business day over those last 2 months.

  • We recently created two new official Google Groups related to the Android Open-Source Project. Android-building is meant to specifically discuss build issues (be sure to search the archives thoroughly before posting). Android-contrib is used to discuss actual contributions (don’t post if you don’t really intend to contribute and follow through on the review process, and if you haven’t already spent an hour or two researching things on your own).

  • We’re developing the developer tools directly in the open-source project, with no work in those areas happening behind closed doors. This covers the Eclipse plug-in and the emulator, and more than a dozen other SDK-related tools.

  • Once a platform version is open-sourced, all improvements to the Compatibility Test Suite related to that version are made directly to the open-source tree. In fact, release 2 of the 2.1 CTS was done 100% that way, with the development, testing and release process all happening straight in the open-source tree. This is now true for Froyo as well, and we are now accepting contributions into the Froyo branch of the CTS project.

I believe that those last two aspects are important to application developers. If you’re an application developer and you’d like to improve the tools that you and your fellow developers use, the process to make changes in that area is now a lot more transparent. Similarly, if during application development you find incompatibilities between devices and believe that those incompatibilities aren’t within the letter or the spirit of Android compatibility, you can help improve the situation by contributing a CTS test for that area.

With Android 2.2 now being available to the open-source world, and with the review process working smoothly, I’m looking forward to seeing a lot more high-quality contributions that will be used to build future versions of Android. My sweetest dream, which is also my worst nightmare, is to have so many contributions that I can’t keep up with them. Please don’t wake me up.

Hands-on at OSCON

This year at OSCON we and O’Reilly are co-presenting Android Hands-on. The event is on the evening of Wednesday, July 21 after the Expo-hall reception. Led by Google Android experts, the Hands-on will run from 7:00 pm-10:00 pm, and will be intense, technical, and structured. The goal is that you leave the room with foundation skills for writing interesting code for an open-source stack that runs on a pocket-sized Internet-connected device.

Some specific topics we’ll cover:

  • Porting existing C codebases to Android

  • Integrating Android apps with RESTful web interfaces

  • UI patterns and best practices

Sign-up in advance is required, and is restricted to registered full conference attendees and speakers. Spaces are limited and will be given out on a first-come-first-served basis.

If you’re considering participating, you might want to keep these things in mind:

  • Android apps are written in the Java programming language, with the exception of some performance-critical code (typically for games) written in C and C++. If you aren’t familiar with at least one of these languages, you won’t benefit much from the session.

  • To prepare, you might want to go to developer.android.com and download the SDK (available for Linux, Mac, and even Windows). Try building the HelloAndroid app and running it on the emulator.

  • You might also benefit from attending the Android for Java developers tutorial on Monday and/or Dan Morrill’s Android: The Whats and Wherefores session on Wednesday morning.

Google Maps for Android Helps You Find the Right Place, Catch a Train, and Add Latitude Friends

Hot off the presses, Google Maps for Android version 4.3 has added a few new features to help you quickly choose the right place to grab dinner, catch the next train, and find friends to add in Latitude.

Have you ever had to make a split-second decision about dinner plans while on the go? Now you can see a snapshot of what people are saying about places right on search result pages. Instead of poring over full reviews, you can start by looking at the most frequently mentioned aspects of a place, such as food, service, atmosphere, or anything else people keep bringing up. Just like on Place Pages for your computer, the color-coded bar gives an overview of how positively people are talking about any individual aspect. Tap one to see more details, like the actual review snippets. Whether you’re looking for top-notch service or a vibrant ambiance, you can now pick just the right place to go.



You’ll also find a new addition to public transit station pages: upcoming schedules. Select any transit station icon directly from the map and open its page by tapping the window. You’ll find a handy list of the next departure times for any subways, trains, or buses that are leaving from that station where transit info is available.



In Google Latitude, we wanted to make it even easier for you to find friends and family with whom you’d like to share your location. Right at the bottom of your Latitude friend list, you’ll be able to quickly start sharing your location with long lost friends, loved ones, and others from your Google Contacts. Add any suggested friends by tapping the + icon and sending them a sharing request. Tap the x and they’ll be dropped from your suggested friends list. Don’t worry -- you can always add them later by choosing “Add friends” from the Latitude menu.



Get the latest version of Maps by searching for Google Maps in Android Market from Android 1.6+ phones. If you’re reading this on your phone, just tap here. Version 4.3 is available in all the countries and languages where Maps is currently available.

Visit our Help Center to learn more, ask questions in our Help Forum, or give us suggestions and vote on other people’s ideas on the Mobile Product Ideas page.

Google Voice for everyone

(Cross-posted with the Google Voice Blog)

A little over a year ago, we released an early preview of Google Voice, our web-based platform for managing your communications. We introduced one number to ring all your phones, voicemail that works like email, free calls and text messages to the U.S. and Canada, low-priced international calls and more—the only catch was you had to request and receive an invite to try it out. Today, after lots of testing and tweaking, we’re excited to open up Google Voice to the public, no invitation required.

Over the past year, we’ve introduced a mobile web app, an integrated voicemail player in Gmail, the ability to use Google Voice with your existing number and more. Over a million of you are now actively using Google Voice, and many of the features released over the past year (like SMS to email and our Chrome extension) came as a result of your suggestions, so thanks!

If you haven’t yet tried Google Voice, we can’t wait for you to try it out and let us know what you think. Check out our revamped features page to learn about everything Google Voice can do, and if you haven’t seen it yet, this video provides a good overview in less than two minutes:



We’re proud of the progress we’ve made with Google Voice over the last few years, and we’re still just scratching the surface of what’s possible when you combine your regular phone service with the latest web technology. It’s even more amazing to think about how far communication has come over the last couple hundred years. To put things in context, we created this infographic to visualize some recent history of human communication and how Google Voice uses the web to help people communicate in more ways than ever before (click the image for a larger version):



Update 10:55 am: Just to clarify, though we've opened up sign-ups, Google Voice is still limited to the U.S. for now.

“Annyeong Haseyo!” “안녕하세요” to Google Search by Voice in Korean

The creation of the Korean alphabet by Sejong the Great was a wonderful advance, enabling literacy for the masses. However, even with the latest smartphone keyboards, entering the characters of the Korean alphabet is still challenging.

Less than two weeks ago we announced Google Search by Voice in French, German, Italian, and Spanish, and today we are happy to announce support for Korean.



Google Search by Voice in action on Android and iPhone

Google Search by Voice will be available soon, pre-installed, on the Samsung Galaxy S and the Nexus One. It is also accessible in the Android Market and via Google Mobile App for the BlackBerry and the iPhone. You can download Google Mobile App at m.google.com.

So if you speak Korean, grab your phone and bid Google Search by Voice a hearty Annyeong Haseyo! 안녕하세요!

Future-Proofing Your App

[This post is by Reto Meier AKA @retomeier, who wrote the book on Android App development. — Tim Bray]

As a developer, I’m excited by Android’s potential as a single development platform that can make my apps available on a wide range of devices. From smartphones to televisions, Android is now being used on an increasingly diverse collection of hardware.

Last year’s Android SDK 1.6 release was the first to introduce support for variations in device hardware, paving the way for devices like the HTC Tattoo — a small screen device with a non-autofocus camera. Future devices, like Google TV, may not include some of the hardware features that we now expect, such as accelerometers and telephony.

We all want our apps available on as many devices as possible, but on some hardware they might just not make sense, so it’s important that apps are available only on the devices where they do.

Android Market Rule #1: Don't let existing applications break on new devices

As curators of the Android Market, one of our most important responsibilities is ensuring consumers and developers can trust the Market to only deliver applications to devices capable of running them.

The Android SDK includes built-in support for specifying which hardware features your application needs, ensuring that when we see more hardware variations, the Market will make sure your apps are available everywhere (and only where) they make sense.

Specify the hardware your app needs using the application Manifest

That includes the target and minimum SDK versions, supported screen sizes, and the required hardware features without which your app will “break”. You can specify the hardware features your app requires by adding a uses-feature node to your manifest.

<uses-feature android:name="android.hardware.microphone" />

By updating your manifest now to include all the hardware features you require, you effectively opt out of future hardware that won’t be capable of properly supporting your app.

Android Market Rule #2: Don't let existing applications break on new devices

In extreme cases — such as the introduction of small screen sizes in Android 1.6 — developers will be required to explicitly opt in their apps before they will be made visible in the Market on these new devices.

In other cases the Android Market will analyze the permissions requested by an app to determine if it implies a dependence on any particular hardware. For example, requiring the CALL_PHONE permission strongly implies the need for telephony hardware.

Until we provide a more convenient tool, you can use AAPT in the SDK to analyze your apps (2.2 SDK required) and see which device requirements are being implicitly added to your application:

aapt dump badging myApp.apk

Where your app uses a particular hardware feature, but you know (and have tested) that it will still work without it, you can specify it as optional by setting the required attribute to false.

<uses-feature android:name="android.hardware.telephony" android:required="false" />
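Marking a feature as optional puts the burden on your code to check for it at runtime before using it. Here's a minimal sketch of one way to do that with PackageManager.hasSystemFeature() (available since Android 2.0); the class and method names below are made up for illustration:

import android.content.Context;
import android.content.pm.PackageManager;

public class HardwareCheck {
    // Returns true only if the device actually has telephony hardware.
    // The feature string matches the uses-feature name in the manifest above.
    public static boolean canPlaceCalls(Context context) {
        PackageManager pm = context.getPackageManager();
        return pm.hasSystemFeature("android.hardware.telephony");
    }
}

Guard any dialing or TelephonyManager code behind a check like this so your app degrades gracefully on hardware that lacks the feature.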

Ensure your application manifest correctly identifies what hardware your app needs, and what is optional

With the uses-feature name strings now available, you can ensure today that your app appears in the Market, where appropriate, on current and future hardware, rather than waiting for those devices to be released.

It's in your interest as a developer to ensure your apps work well, and are available, on as many devices as possible and appropriate. Now is the time to test your applications and update your Manifest to opt in to all hardware configurations which you support, and opt out of those that don’t make sense.

The Iterative Web App: New Compose Interface for Gmail on iPad

In April 2009, we announced a new version of Gmail for mobile for iOS and Android. Among the improvements was a complete redesign of the web application’s underlying code, which allows us to more rapidly develop and release new features that users have been asking for, as explained in our first post. We'd like to introduce The Iterative Web App, a series where we will continue to release features for Gmail for mobile. Today: New Compose Interface on the iPad.

Today we’re happy to announce an improved experience for writing emails in the Gmail web interface on the iPad. When you write an email you’ll now get a big, full-screen compose window instead of splitting the screen between your inbox and the compose view. More text is visible at once and there are no more distractions from messages on the side. We’ve also fixed problems that prevented scrolling on long messages. Thanks to everyone who reported the issue via the ‘Send feedback’ feature at the bottom of the screen.



We’re continuing to experiment with the large touchscreen and tablet form factor so send more feedback if you have suggestions. To try out Gmail on the iPad, just go to gmail.com in Safari. Please note that the new interface is only available in US English for now.



Posted by Craig Wilkinson, Software Engineer, Google Mobile

Game Development for Android: A Quick Primer

[This post is by Chris Pruett, an outward-facing Androider who focuses on the world of games. — Tim Bray]

If you attended Google I/O this year, you might have noticed the large number of game developers showing off their stuff in the Android part of the Developer Sandbox. Unity, EA, Com2Us, Polarbit, Laminar Research, and several others demonstrated high-end games running on Android devices. Part of my role as a Game Developer Advocate for Android is to field requests for information about Android from game developers, and in the last six months the number of requests has gone through the roof. Since there’s clearly a huge amount of interest in Android game development, here’s an overview of how Android games work and what you as a developer should know.

Step One: Decide on a Target Device Class

There are basically two types of devices running Android that you should consider: lower-end devices like the G1 (which I’ll call “first generation” devices), and high-end devices like the Nexus One (“second generation” devices). Though there are a lot of different Android phones on the market, they fall rather neatly into these two classes when it comes to CPU and graphics performance, which are the variables that game developers usually care the most about.

First generation devices are generally phones with HVGA screens, running Android 1.5 or 1.6 (though a few are starting to make their way to 2.1), typically with a 500MHz CPU and a hardware-accelerated OpenGL ES 1.0 backend. A large number of devices sport internals similar to the G1 (Qualcomm MSM7K CPU/GPU at ~500MHz), so the G1 is representative of this class (and can be safely considered the low end of the spectrum). Based on my tests, these devices can push about 5000 textured, colored, unlit vertices per frame and still maintain 30 frames per second. Using OpenGL ES to draw, I can get just over 250 animating sprites on the screen at 30 frames per second (at 60 fps I can draw just over 100 sprites per frame). These aren’t hard numbers; my benchmarks are fairly primitive, and I’m sure that they can be improved (for example, I haven’t done tests using the GL_OES_point_sprite extension, which the G1 supports). But they should give you an idea of what the first generation class of devices can do.

Second generation devices generally have WVGA screens, much faster CPUs and GPUs, and support for OpenGL ES 2.0. The Nexus One and Verizon Droid by Motorola are both good examples of this class. These devices seem to be about 5x faster than the first generation devices when it comes to raw OpenGL ES 1.0 performance (I can get at least 27,000 textured unlit colored vertices per frame at 30 frames per second on all of the second generation devices I’ve tested). Using OpenGL ES 2.0 can be even faster, as these devices typically incur some overhead translating OpenGL ES 1.0 commands to their 2.0-native graphics hardware. However, the large screens on these devices often mean that they are fill-bound: the cost of filling the screen with pixels is high enough that it’s often not possible to draw faster than 30 frames per second, regardless of scene complexity.

Since there is a pretty wide performance delta between the first generation class of devices and the second, you should be careful when selecting a target. Based on our metrics about Android versions, first generation devices represent over half of all of the Android phones on the market (as of this writing, anyway; 2.0 devices are growing very quickly). Those games that are able to scale between the first and second generation devices have the largest audience.

Step Two: Pick a Language

If you’re an Android app programmer who’s thinking about getting into game development, chances are you are planning on writing code in Java. If you’re a game development veteran who’s thinking of bringing games to Android, it’s likely that you prefer to do everything in C++.

The side-scrolling action game that I wrote, Replica Island, is entirely Java. It uses OpenGL ES 1.0 to draw and is backwards compatible to Android 1.5. It runs at a good frame rate (close to 60 fps on the G1) across almost all Android devices. In fact, many of the popular games on Android Market were written in Java, so if you’re the type of person who finds coding in C++ like speaking in tongues, you can rest easy in the knowledge that Java on Android is perfectly viable for games.

That said, native code is the way to go if your game needs to run as fast as possible. We’ve just released the fourth revision of our Native Development Kit for Android, and it includes a number of improvements that are particularly useful to game developers. Using the NDK, you can compile your code into a shared library, wrap it in a thin Java shell to manage input and lifecycle events, and do all of the heavy lifting in C++ with regular OpenGL ES APIs. As of Revision 4, you can also draw directly into Java Bitmap pixel buffers from native code, which should be faster than loading bitmaps as GL textures every frame for 2D games that want to do their own scene compositing. Revision 4 also (finally!) includes gdb support for debugging your native code on the device.
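To make the “thin Java shell” idea concrete, here is a rough sketch of the pattern. The library name and native method names are invented for illustration, and the GLSurfaceView/Activity plumbing that would call them is omitted:

package com.example.game;

public class GameJNI {
    static {
        // Loads libgame.so, built with the NDK.
        System.loadLibrary("game");
    }

    // Implemented in C++ (exported as Java_com_example_game_GameJNI_nativeInit, etc.).
    public static native void nativeInit(int width, int height);

    // Called once per frame from the rendering thread.
    public static native void nativeDrawFrame();

    // Forward touch input from onTouchEvent() into the native game loop.
    public static native void nativeTouch(float x, float y, int action);
}

The heavy lifting then lives in C++ behind these entry points, and with NDK r4 you can attach gdb to that native code on the device.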

You should know that when using the NDK, you don’t have access to Android Framework APIs. There’s no way, for example, to play audio from C++ (though we announced at Google I/O our intention to support OpenSL ES in the future). Some developers use the AudioTrack API to share a direct buffer with native code that mixes and generates a PCM stream on the fly, and many call from C++ into the Java SoundPool interface. Just be aware that for this type of work, a jump through JNI back into Java code is required.
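As an illustration of the AudioTrack approach, here's a sketch of a Java thread that streams PCM produced by native code; nativeFillBuffer is a stand-in for whatever your C++ mixer actually exposes:

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

public class PcmStreamer implements Runnable {
    private static final int SAMPLE_RATE = 44100;

    public void run() {
        int bufferSize = AudioTrack.getMinBufferSize(SAMPLE_RATE,
                AudioFormat.CHANNEL_CONFIGURATION_STEREO,
                AudioFormat.ENCODING_PCM_16BIT);
        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, SAMPLE_RATE,
                AudioFormat.CHANNEL_CONFIGURATION_STEREO,
                AudioFormat.ENCODING_PCM_16BIT, bufferSize, AudioTrack.MODE_STREAM);
        short[] pcm = new short[bufferSize / 2]; // bufferSize is in bytes, pcm holds 16-bit samples
        track.play();
        while (!Thread.interrupted()) {
            nativeFillBuffer(pcm);           // hypothetical JNI call into your C++ mixer
            track.write(pcm, 0, pcm.length); // blocking write paces the mixing loop
        }
        track.release();
    }

    private static native void nativeFillBuffer(short[] buffer);
}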

Step Three: Carefully Design the Best Game Ever

Once you have a target system spec and have decided on a development environment, you’re off and running. But before you get too deep into your epic ragdoll physics-based space marine action online RPG with branching endings and a morality system, take a minute to think about your end users. Specifically, there are two areas that require consideration for Android games that you might not be used to: texture compression and input systems.

Texture compression is a way to (surprise!) compress your texture data. It can improve draw performance and let you pack more texture into VRAM. The problem with texture compression is that different graphics card vendors support different texture formats. The G1 and other MSM7k devices support ATI’s ATITC compression format. The Droid supports PowerVR’s PVRTC format. Nvidia’s Tegra2 platform supports the DXT format. The bad news is, these formats are not compatible. The good news is, all OpenGL ES 2.0 devices (including the Snapdragon-based Nexus One, the OMAP3-based Droid, and Tegra2 devices) support a common format called ETC1. ETC1 isn’t the best texture format (it lacks support for alpha channels), and it isn’t supported on the first generation devices, but it’s the most commonly supported format; the Android SDK provides a compressor utility (see sdk/tools/etc1tool) and runtime tools for it.

The bottom line is that if you compress your textures, you’ll need to somehow provide different versions of those textures compressed with different formats. You could do this all in a single apk, or you could download textures from a web site over HTTP, or you could use ETC1 and restrict yourself to only OpenGL ES 2.0 devices. For Replica Island, I just chose not to compress my textures at all and had no problems. You can query the GL_EXTENSIONS string to see what the device you are currently running on supports.

String extensions = " " + gl.glGetString(GL10.GL_EXTENSIONS) + " ";
String version = gl.glGetString(GL10.GL_VERSION);
String renderer = gl.glGetString(GL10.GL_RENDERER);

boolean isSoftwareRenderer = renderer.contains("PixelFlinger");

// On 1.6 and newer, we could use ActivityManager.getDeviceConfigurationInfo() to get the GL version.
// To include 1.5, I'll use the GL version string.
boolean isOpenGL10 = version.contains(" 1.0");
boolean supportsDrawTexture =
        extensions.contains(" GL_OES_draw_texture ");  // draw_texture extension
boolean supportsETC1 =
        extensions.contains(" GL_OES_compressed_ETC1_RGB8_texture ");  // standard ETC1 support extension

// VBOs are guaranteed in GLES 1.1, but they were an extension under 1.0.
// There's no point in using VBOs when using the software renderer (though they are supported).
boolean supportsVBOs =
        !isSoftwareRenderer && (!isOpenGL10 || extensions.contains("vertex_buffer_object "));
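If you do standardize on ETC1 for second generation devices, the 2.2 SDK also includes runtime helpers in android.opengl.ETC1Util. A rough sketch of loading a compressed texture (the asset name is made up, and this assumes a texture name is already bound via glBindTexture):

try {
    InputStream in = context.getAssets().open("textures/ground.pkm");
    try {
        // Uploads the ETC1 data, or decompresses to the fallback format below
        // if the driver doesn't support ETC1 natively.
        ETC1Util.loadTexture(GLES10.GL_TEXTURE_2D, 0, 0,
                GLES10.GL_RGB, GLES10.GL_UNSIGNED_SHORT_5_6_5, in);
    } finally {
        in.close();
    }
} catch (IOException e) {
    Log.e("Textures", "Could not load ETC1 texture", e);
}

ETC1Util.isETC1Supported() reports whether the current GL context handles ETC1 natively, which is handy if you want to choose a different format on unsupported hardware.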

You should also think carefully about how your game will be played. Some phones have a trackball, some have a directional pad, some have a hardware keyboard, some support multitouch screens. Others have none of those things. Per the Compatibility Definition Document, all Android devices that have Android Market are required to have a touch screen and three-axis accelerometer, so if you can get away with just tilt and single touch, you don’t need to worry about input much. If you want to take advantage of the various input devices that these phones support (which, based on several thousand comments on Android Market about Replica Island, I wholeheartedly recommend), the Android API will package the events up for you in a standard way.

That said, one of the most dramatic lessons I learned after shipping Replica Island is that users want customizable controls. Even if you have added perfect support for every phone, many users will want to go in and tweak it. Or they prefer the hardware keyboard over their phone’s dpad. Or they prefer tilt controls over trackball controls. My advice: plan on providing customizable controls, both so that you can support phones that have input configurations that you didn’t consider, and also so that you can allow users to tweak the experience to match their preferences.

Step Four: Profit!

The rest is up to you. But before you go, here are a few resources that might come in handy:

  • HeightMapProfiler. This is a simple 3D benchmarking tool that I wrote. It is the source of the performance numbers in this post. You can also use it to test how various GL state affects performance on your device (texture size, texture filtering, mip-mapping, etc).


  • SpriteMethodTest. Another simple benchmarking tool, this one for sprite drawing. This code is also useful as a 2D game skeleton application.


  • GLSurfaceView. This is a Java class that makes it trivial to set up an OpenGL ES application. You can use this code in combination with the NDK or with Java alone; a bare-bones setup sketch appears after this list.


  • Quake Port. The complete source for an Android port of Quake has been made available by Jack Palevich, an Android team engineer. It’s a great sample of how to mix Java and native code, how to download textures to the sdcard over HTTP, and all kinds of other cool stuff (check out his memory-mapped-to-sdcard texture manager).


  • Replica Island. Here’s the complete source to my game, released under Apache2. Use it as a reference, or to make your own games.
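Since GLSurfaceView comes up in the list above, here's a bare-bones sketch of the setup; the GameActivity and GameRenderer names are invented, but this is roughly the skeleton such samples build on:

import android.app.Activity;
import android.opengl.GLSurfaceView;
import android.os.Bundle;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class GameActivity extends Activity {
    private GLSurfaceView mGLView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        mGLView = new GLSurfaceView(this);
        mGLView.setRenderer(new GameRenderer()); // rendering runs on its own GL thread
        setContentView(mGLView);
    }

    @Override
    protected void onPause() {
        super.onPause();
        mGLView.onPause(); // pause the rendering thread and release the GL context
    }

    @Override
    protected void onResume() {
        super.onResume();
        mGLView.onResume();
    }

    private static class GameRenderer implements GLSurfaceView.Renderer {
        public void onSurfaceCreated(GL10 gl, EGLConfig config) {
            // Load textures and set up GL state here.
        }

        public void onSurfaceChanged(GL10 gl, int width, int height) {
            gl.glViewport(0, 0, width, height);
        }

        public void onDrawFrame(GL10 gl) {
            gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            // Draw your sprites or geometry here.
        }
    }
}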


Download Count Problems

Something is apparently wrong in the Android Market. We are getting multiple reports of erroneous download counts. The right people are aware of the situation and are working on it.

[Update, Monday morning: The fix was rolled in early Sunday and it seems as though app developers have their missing downloads back.]

Blogging Round the World

It seems that once or twice a week, I run across an Android-developer-oriented site that I hadn’t previously noticed. There are already a few aggregators and directories, and I think we’re going to need more. But for the moment, here are three pieces of bloggy Android goodness, from Florida, Odessa (Ukraine!), and Sydney. What they have in common is that I previously hadn't encountered any of them.

Font Magic

This is from Florida-based Jeff Vera’s Musings of the Bare Bones Coder, which, although it advertises itself as being about “Coding and managing in the .NET space”, recently ran the excellent Android Development – Using Custom Fonts. You’ve always been able to use your own fonts in your own apps, but the how-to coverage has been light.

How Hot Is It?

Ivan Memruk from Odessa, Ukraine, brings us Mind The Robot, which has a refreshing concern for visual elegance. Speaking of which, soak up the analog steampunk tastiness of Android Custom UI: Making a Vintage Thermometer.

Aussie Rules

In this case, I mean rules for getting your Android project set up for use both via Eclipse and command-line Ant. Daniel Ostermeier and Jason Sankey from Sydney run the Android-dense a little madness, and lay the rules out in Setting Up An Android Project Build. Lots of steps, but very handy for a command-line guy like me.

Settle trivia debates anytime, anywhere

Last month we launched a way to provide short answers to search queries, and it's now available in English on your iPhone, Palm WebOS or Android-powered device. If you’re like us, you may sometimes engage in trivia matches with friends on topics as far-ranging as which continent Turkey is in, when Star Wars was released, or who succeeded Augustus. Now you can settle that debate there and then by searching Google from your mobile; you can speak your question into Google Search on Android or Google Mobile App for iPhone, or you can visit google.com from your mobile browser to type your search.

If your friends challenge the answer provided in Google Search results, you can corroborate the information with a list of websites by clicking on the “Show sources” link. The source list includes the relevant text from each page so you can quickly verify whether Google interpreted the context of the answer correctly. You can also click through to the original website to get all the details.

We continue to work on providing short answers to more questions. Here are some additional examples to try:
  • Who’s taller? [height of kobe bryant] or [height of paul pierce]
  • Geography trivia? [capital of massachusetts], [language in netherlands]
  • Literature trivia? [author of les miserables], [george eliot’s gender]
  • Movie trivia? [release date of shrek], [director of harry potter 3]
  • Music trivia? [composer of four seasons], [birthday of lady gaga]

Making Sense of Multitouch

[This post is by Adam Powell, one of our more touchy-feely Android engineers. — Tim Bray]



The word “multitouch” gets thrown around quite a bit and it’s not always clear what people are referring to. For some it’s about hardware capability, for others it refers to specific gesture support in software. Whatever you decide to call it, today we’re going to look at how to make your apps and views behave nicely with multiple fingers on the screen.

This post is going to be heavy on code examples. It will cover creating a custom View that responds to touch events and allows the user to manipulate an object drawn within it. To get the most out of the examples you should be familiar with setting up an Activity and the basics of the Android UI system. Full project source will be linked at the end.

We’ll begin with a new View class that draws an object (our application icon) at a given position:

public class TouchExampleView extends View {
    private Drawable mIcon;
    private float mPosX;
    private float mPosY;

    private float mLastTouchX;
    private float mLastTouchY;

    public TouchExampleView(Context context) {
        this(context, null, 0);
    }

    public TouchExampleView(Context context, AttributeSet attrs) {
        this(context, attrs, 0);
    }

    public TouchExampleView(Context context, AttributeSet attrs, int defStyle) {
        super(context, attrs, defStyle);
        mIcon = context.getResources().getDrawable(R.drawable.icon);
        mIcon.setBounds(0, 0, mIcon.getIntrinsicWidth(), mIcon.getIntrinsicHeight());
    }

    @Override
    public void onDraw(Canvas canvas) {
        super.onDraw(canvas);

        canvas.save();
        canvas.translate(mPosX, mPosY);
        mIcon.draw(canvas);
        canvas.restore();
    }

    @Override
    public boolean onTouchEvent(MotionEvent ev) {
        // More to come here later...
        return true;
    }
}

MotionEvent

The Android framework’s primary point of access for touch data is the android.view.MotionEvent class. Passed to your views through the onTouchEvent and onInterceptTouchEvent methods, MotionEvent contains data about “pointers,” or active touch points on the device’s screen. Through a MotionEvent you can obtain X/Y coordinates as well as size and pressure for each pointer. MotionEvent.getAction() returns a value describing what kind of motion event occurred.

One of the more common uses of touch input is letting the user drag an object around the screen. We can accomplish this in our View class from above by implementing onTouchEvent as follows:

@Override
public boolean onTouchEvent(MotionEvent ev) {
    final int action = ev.getAction();
    switch (action) {
        case MotionEvent.ACTION_DOWN: {
            final float x = ev.getX();
            final float y = ev.getY();

            // Remember where we started
            mLastTouchX = x;
            mLastTouchY = y;
            break;
        }

        case MotionEvent.ACTION_MOVE: {
            final float x = ev.getX();
            final float y = ev.getY();

            // Calculate the distance moved
            final float dx = x - mLastTouchX;
            final float dy = y - mLastTouchY;

            // Move the object
            mPosX += dx;
            mPosY += dy;

            // Remember this touch position for the next move event
            mLastTouchX = x;
            mLastTouchY = y;

            // Invalidate to request a redraw
            invalidate();
            break;
        }
    }

    return true;
}

The code above has a bug on devices that support multiple pointers. While dragging the image around the screen, place a second finger on the touchscreen then lift the first finger. The image jumps! What’s happening? We’re calculating the distance to move the object based on the last known position of the default pointer. When the first finger is lifted, the second finger becomes the default pointer and we have a large delta between pointer positions which our code dutifully applies to the object’s location.

If all you want is info about a single pointer’s location, the methods MotionEvent.getX() and MotionEvent.getY() are all you need. MotionEvent was extended in Android 2.0 (Eclair) to report data about multiple pointers and new actions were added to describe multitouch events. MotionEvent.getPointerCount() returns the number of active pointers. getX and getY now accept an index to specify which pointer’s data to retrieve.
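For example, a view that just wanted to log every pointer in an event could walk them by index (an illustrative fragment; it assumes android.util.Log is imported and that ev is the MotionEvent passed to onTouchEvent):

final int pointerCount = ev.getPointerCount();
for (int i = 0; i < pointerCount; i++) {
    // i is only an index into this event's data; getPointerId() is covered below.
    Log.d("TouchExample", "pointer " + ev.getPointerId(i)
            + " at (" + ev.getX(i) + ", " + ev.getY(i) + ")");
}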

Index vs. ID

At a higher level, touchscreen data from a snapshot in time may not be immediately useful since touch gestures involve motion over time spanning many motion events. A pointer index does not necessarily match up across complex events; it only indicates the data’s position within the MotionEvent. However, keeping track of individual pointers across events is not work that your app has to do itself: each pointer also has an ID mapping that stays persistent across touch events. You can retrieve this ID for each pointer using MotionEvent.getPointerId(index) and find an index for a pointer ID using MotionEvent.findPointerIndex(id).

Feeling Better?

Let’s fix the example above by taking pointer IDs into account.

private static final int INVALID_POINTER_ID = -1;

// The ‘active pointer’ is the one currently moving our object.
private int mActivePointerId = INVALID_POINTER_ID;

// Existing code ...

@Override
public boolean onTouchEvent(MotionEvent ev) {
    final int action = ev.getAction();
    switch (action & MotionEvent.ACTION_MASK) {
        case MotionEvent.ACTION_DOWN: {
            final float x = ev.getX();
            final float y = ev.getY();

            mLastTouchX = x;
            mLastTouchY = y;

            // Save the ID of this pointer
            mActivePointerId = ev.getPointerId(0);
            break;
        }

        case MotionEvent.ACTION_MOVE: {
            // Find the index of the active pointer and fetch its position
            final int pointerIndex = ev.findPointerIndex(mActivePointerId);
            final float x = ev.getX(pointerIndex);
            final float y = ev.getY(pointerIndex);

            final float dx = x - mLastTouchX;
            final float dy = y - mLastTouchY;

            mPosX += dx;
            mPosY += dy;

            mLastTouchX = x;
            mLastTouchY = y;

            invalidate();
            break;
        }

        case MotionEvent.ACTION_UP: {
            mActivePointerId = INVALID_POINTER_ID;
            break;
        }

        case MotionEvent.ACTION_CANCEL: {
            mActivePointerId = INVALID_POINTER_ID;
            break;
        }

        case MotionEvent.ACTION_POINTER_UP: {
            // Extract the index of the pointer that left the touch sensor
            final int pointerIndex = (action & MotionEvent.ACTION_POINTER_INDEX_MASK)
                    >> MotionEvent.ACTION_POINTER_INDEX_SHIFT;
            final int pointerId = ev.getPointerId(pointerIndex);
            if (pointerId == mActivePointerId) {
                // This was our active pointer going up. Choose a new
                // active pointer and adjust accordingly.
                final int newPointerIndex = pointerIndex == 0 ? 1 : 0;
                mLastTouchX = ev.getX(newPointerIndex);
                mLastTouchY = ev.getY(newPointerIndex);
                mActivePointerId = ev.getPointerId(newPointerIndex);
            }
            break;
        }
    }

    return true;
}

There are a few new elements at work here. We’re switching on action & MotionEvent.ACTION_MASK now rather than just action itself, and we’re using a new MotionEvent action constant, MotionEvent.ACTION_POINTER_UP. ACTION_POINTER_DOWN and ACTION_POINTER_UP are fired whenever a secondary pointer goes down or up. If there is already a pointer on the screen and a new one goes down, you will receive ACTION_POINTER_DOWN instead of ACTION_DOWN. If a pointer goes up but there is still at least one touching the screen, you will receive ACTION_POINTER_UP instead of ACTION_UP.

The ACTION_POINTER_DOWN and ACTION_POINTER_UP events encode extra information in the action value. ANDing it with MotionEvent.ACTION_MASK gives us the action constant while ANDing it with ACTION_POINTER_INDEX_MASK gives us the index of the pointer that went up or down. In the ACTION_POINTER_UP case our example extracts this index and ensures that our active pointer ID is not referring to a pointer that is no longer touching the screen. If it was, we select a different pointer to be active and save its current X and Y position. Since this saved position is used in the ACTION_MOVE case to calculate the distance to move the onscreen object, we will always calculate the distance to move using data from the correct pointer.

This is all the data that you need to process any sort of gesture your app may require. However dealing with this low-level data can be cumbersome when working with more complex gestures. Enter GestureDetectors.

GestureDetectors

Since apps can have vastly different needs, Android does not spend time cooking touch data into higher level events unless you specifically request it. GestureDetectors are small filter objects that consume MotionEvents and dispatch higher level gesture events to listeners specified during their construction. The Android framework provides two GestureDetectors out of the box, but you should also feel free to use them as examples for implementing your own if needed. GestureDetectors are a pattern, not a prepackaged solution. They’re not just for complex gestures such as drawing a star while standing on your head; they can even make simple gestures like fling or double tap easier to work with.

android.view.GestureDetector generates gesture events for several common single-pointer gestures used by Android including scrolling, flinging, and long press. For Android 2.2 (Froyo) we’ve also added android.view.ScaleGestureDetector for processing the most commonly requested two-finger gesture: pinch zooming.

Gesture detectors follow the pattern of providing a method public boolean onTouchEvent(MotionEvent). This method, like its namesake in android.view.View, returns true if it handles the event and false if it does not. In the context of a gesture detector, a return value of true implies that there is an appropriate gesture currently in progress. GestureDetector and ScaleGestureDetector can be used together when you want a view to recognize multiple gestures.

To report detected gesture events, gesture detectors use listener objects passed to their constructors. ScaleGestureDetector uses ScaleGestureDetector.OnScaleGestureListener. ScaleGestureDetector.SimpleOnScaleGestureListener is offered as a helper class that you can extend if you don’t care about all of the reported events.

Since we are already supporting dragging in our example, let’s add support for scaling. The updated example code is shown below:

private ScaleGestureDetector mScaleDetector;
private float mScaleFactor = 1.f;

// Existing code ...

public TouchExampleView(Context context, AttributeSet attrs, int defStyle) {
    super(context, attrs, defStyle);
    mIcon = context.getResources().getDrawable(R.drawable.icon);
    mIcon.setBounds(0, 0, mIcon.getIntrinsicWidth(), mIcon.getIntrinsicHeight());

    // Create our ScaleGestureDetector
    mScaleDetector = new ScaleGestureDetector(context, new ScaleListener());
}

@Override
public boolean onTouchEvent(MotionEvent ev) {
    // Let the ScaleGestureDetector inspect all events.
    mScaleDetector.onTouchEvent(ev);

    final int action = ev.getAction();
    switch (action & MotionEvent.ACTION_MASK) {
        case MotionEvent.ACTION_DOWN: {
            final float x = ev.getX();
            final float y = ev.getY();

            mLastTouchX = x;
            mLastTouchY = y;
            mActivePointerId = ev.getPointerId(0);
            break;
        }

        case MotionEvent.ACTION_MOVE: {
            final int pointerIndex = ev.findPointerIndex(mActivePointerId);
            final float x = ev.getX(pointerIndex);
            final float y = ev.getY(pointerIndex);

            // Only move if the ScaleGestureDetector isn't processing a gesture.
            if (!mScaleDetector.isInProgress()) {
                final float dx = x - mLastTouchX;
                final float dy = y - mLastTouchY;

                mPosX += dx;
                mPosY += dy;

                invalidate();
            }

            mLastTouchX = x;
            mLastTouchY = y;

            break;
        }

        case MotionEvent.ACTION_UP: {
            mActivePointerId = INVALID_POINTER_ID;
            break;
        }

        case MotionEvent.ACTION_CANCEL: {
            mActivePointerId = INVALID_POINTER_ID;
            break;
        }

        case MotionEvent.ACTION_POINTER_UP: {
            final int pointerIndex = (ev.getAction() & MotionEvent.ACTION_POINTER_INDEX_MASK)
                    >> MotionEvent.ACTION_POINTER_INDEX_SHIFT;
            final int pointerId = ev.getPointerId(pointerIndex);
            if (pointerId == mActivePointerId) {
                // This was our active pointer going up. Choose a new
                // active pointer and adjust accordingly.
                final int newPointerIndex = pointerIndex == 0 ? 1 : 0;
                mLastTouchX = ev.getX(newPointerIndex);
                mLastTouchY = ev.getY(newPointerIndex);
                mActivePointerId = ev.getPointerId(newPointerIndex);
            }
            break;
        }
    }

    return true;
}

@Override
public void onDraw(Canvas canvas) {
    super.onDraw(canvas);

    canvas.save();
    canvas.translate(mPosX, mPosY);
    canvas.scale(mScaleFactor, mScaleFactor);
    mIcon.draw(canvas);
    canvas.restore();
}

private class ScaleListener extends ScaleGestureDetector.SimpleOnScaleGestureListener {
    @Override
    public boolean onScale(ScaleGestureDetector detector) {
        mScaleFactor *= detector.getScaleFactor();

        // Don't let the object get too small or too large.
        mScaleFactor = Math.max(0.1f, Math.min(mScaleFactor, 5.0f));

        invalidate();
        return true;
    }
}

This example merely scratches the surface of what ScaleGestureDetector offers. The listener methods receive a reference to the detector itself as a parameter that can be queried for extended information about the gesture in progress. See the ScaleGestureDetector API documentation for more details.

Now our example app allows a user to drag with one finger, scale with two, and it correctly handles passing active pointer focus between fingers as they contact and leave the screen. You can download the final sample project at http://code.google.com/p/android-touchexample/. It requires the Android 2.2 SDK (API level 8) to build and a 2.2 (Froyo) powered device to run.

From Example to Application

In a real app you would want to tweak the details about how zooming behaves. When zooming, users will expect content to zoom about the focal point of the gesture as reported by ScaleGestureDetector.getFocusX() and getFocusY(). The specifics of this will vary depending on how your app represents and draws its content.
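As one possible approach (a sketch only, reusing the mPosX, mPosY and mScaleFactor fields from the example above), onScale() can shift the drawing position so that the content under the gesture's focal point stays fixed while the scale changes:

@Override
public boolean onScale(ScaleGestureDetector detector) {
    final float newScale = Math.max(0.1f, Math.min(mScaleFactor * detector.getScaleFactor(), 5.0f));
    final float applied = newScale / mScaleFactor; // scale change actually applied after clamping
    mScaleFactor = newScale;

    // Since onDraw() translates by (mPosX, mPosY) before scaling, keeping the
    // focal point stationary means pulling the position toward it by the same ratio.
    final float focusX = detector.getFocusX();
    final float focusY = detector.getFocusY();
    mPosX = focusX - (focusX - mPosX) * applied;
    mPosY = focusY - (focusY - mPosY) * applied;

    invalidate();
    return true;
}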

Different touchscreen hardware may have different capabilities; some panels may only support a single pointer, others may support two pointers but with position data unsuitable for complex gestures, and others may support precise positioning data for two pointers and beyond. You can query what type of touchscreen a device has at runtime using PackageManager.hasSystemFeature().
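For instance, a sketch of such a query from within a view (the PackageManager constants shown correspond to the android.hardware.touchscreen feature names):

PackageManager pm = getContext().getPackageManager();
// Basic multitouch: distinguishes two pointers well enough for pinch gestures.
boolean basicMultitouch =
        pm.hasSystemFeature(PackageManager.FEATURE_TOUCHSCREEN_MULTITOUCH);
// Distinct multitouch: tracks two or more pointers fully independently.
boolean distinctMultitouch =
        pm.hasSystemFeature(PackageManager.FEATURE_TOUCHSCREEN_MULTITOUCH_DISTINCT);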

As you design your user interface keep in mind that people use their mobile devices in many different ways and not all Android devices are created equal. Some apps might be used one-handed, making multiple-finger gestures awkward. Some users prefer using directional pads or trackballs to navigate. Well-designed gesture support can put complex functionality at your users’ fingertips, but also consider designing alternate means of accessing application functionality that can coexist with gestures.

Fun on the Autobahn: Google Maps Navigation in 11 more Countries

There’s nothing quite like driving through Europe in the summer. In the past week, I’ve seen the beautiful Val d’Aosta, the Swiss Alps, the Cathedral in Chartres, and travelled through the Channel Tunnel as I road-tripped from Milan to Geneva, Zürich to Stuttgart, and on through Paris to London. Why the burst of mileage? Well, I’ve been testing Google Maps Navigation version 4.2. Yes, road-testing it around Europe was a grueling process, but somebody had to do it :)

Today we’re launching Google Maps Navigation version 4.2 in Austria, Belgium, Canada, Denmark, France, Germany, Italy, the Netherlands, Portugal, Spain, and Switzerland for Android devices 1.6 and higher. Google Maps Navigation is an Internet-connected GPS navigation or ‘satnav’ system that provides turn-by-turn voice guidance as a free feature of Google Maps.


On my test trip, I found a number of Navigation features useful:
  • While driving through the Loire Valley, I put my French language skills to the test by finding my destination with Search by voice (now launched in French, German, Italian, and Spanish for Android 2.0 and higher);
  • I previewed a typical British roundabout with Street View to see exactly where I’d need to exit before getting there in person;
  • I satisfied my craving for moules frites by searching for it along my route;
  • I kept the gas stations layer on to ensure I’d always know where the nearest petrol station was, just in case;
  • And, of course, the turn-by-turn voice guidance kept me on-track to my destination -- despite my sometimes spotty connection in mountain tunnels -- thanks to the way Maps Navigation saves the route on your device when you start.
Google Maps Navigation (beta) with Search by voice is available in version 4.2 of Google Maps, on Android devices 1.6 and higher. To download Google Maps version 4.2, search for Google Maps in Android Market.

Try Google Maps Navigation in your local country and language today -- and have a great time touring around the Continent this summer if you get the chance!

Salut! Willkommen! Benvenuto! ¡Bienvenido! Google Search by Voice in French, German, Italian and Spanish

Here’s a test for the German speakers out there: which is faster...saying Geschwindigkeitsbeschränkung (German for speed limit), or typing the same query character-by-character?

Voice has always been the most natural way to interact with a phone -- speaking is typically faster and easier than typing. We first developed Search by voice for English, and then for Mandarin Chinese and Japanese. Today we’re excited to welcome speakers of French, German, Italian, and Spanish.

Images of Google Search by Voice in Italian (Android), German (iPhone), Spanish (BlackBerry)

Our goal is to bring Google Search by voice to speakers of all languages. We follow a rigorous process to add each new language or dialect. Working directly with native speakers in each country, we spend weeks collecting spoken utterances to create the specific models which power the service. Our helpers are asked to read popular queries in their native tongue, in a variety of acoustic conditions such as in restaurants, out on busy streets, and inside cars. We also construct, for each language, a vocabulary of over one million recognizable words. It’s no small feat, but we love doing it.

Note that our new language models are designed for accents from Spain, France, Italy, and Germany. If you speak one of the new languages with another accent (for example, German in Austria, French in Switzerland, or Spanish in Mexico), Search by voice may not work so well for you.

How you get started with Google Search by voice depends on what kind of phone you have. If your phone runs Android 2.1 or later, and you have the Quick Search Box installed, all you have to do is tap the microphone icon to start a voice-powered search. iPhone and BlackBerry users who already have Google Mobile App installed can enable voice search by selecting the new languages from the settings panel within the app.

If you have Android 1.6 or 2.1 (Donut or Eclair), and you have already installed the Search by voice application, starting later today voice search will return recognition results for French, German, Italian or Spanish if your phone has one of those languages chosen in ‘Language and keyboard’ settings. If you do not have the Search by voice application, you can install it from Android Market on your phone - search for ‘voice search’. This application is only available in the Android Markets for France, Germany, Italy and Spain.

To get Google Mobile App for iPhone, search for ‘Google Mobile App’ in the App Store or follow this link. BlackBerry and Nokia S60 users should visit m.google.com using their phone’s browser.

Learn more at http://mobile.google.com and select your country in the footer.

So if you speak French, Italian, German, or Spanish, grab your phone and bid Google Search by voice a hearty Salut! Willkommen! Benvenuto! ¡Bienvenido!

Application Visibility Issues

Recently we became aware that some Android applications were not visible on the Android Market. While we were internally troubleshooting and qualifying the fix and communicating with our hardware partners, developers were trying hard to get our help through various technical support sites. Regrettably, we fell short of our own standard for customer support by not communicating the issue to our developers and how we were working to resolve it.

We’re pleased to say that the issue looks to be resolved with a patch, and to the best of our knowledge, all apps that were previously impacted are up and visible again. Again, apologies for the delay and inconvenience this created.

Making AdSense for Mobile Applications Work With More Ad Networks

We’re always working to help people grow their mobile business with ads. Today we’re making our tools even more flexible by allowing publishers participating in our AdSense for Mobile Applications beta program to use third-party mediators. Mediation lets app developers use multiple ad networks simultaneously - reaching a greater pool of advertisers, and spending more time building their apps and less time managing ad inventory.

AdSense for Mobile Applications beta publishers will now be able to manage their ad inventory using third-party ad serving mediators, as long as their apps meet certain conditions, including:
  • Using the latest version of the AdSense for Mobile Applications SDK
  • Abiding by the AdSense for mobile applications terms and program policies
  • Agreeing to Google’s privacy policy
We think this is great news for our AdSense for Mobile Applications publishers because it will allow them to easily optimize and fill their ad inventory. We believe this also shows our commitment to developing the mobile advertising ecosystem by ensuring that the most optimal ad is shown to users, and enabling our AdSense partners to earn more regardless of which networks they use.

To learn more about monetizing for mobile, or to learn more about how to apply for the AdSense for Mobile Applications beta program, please visit www.google.com/mobileads/developer.

Update 6/3/10 10:30 PST: We had previously written that this helps developers avoid implementing individual SDKs from each ad network, but this is not the case.

Posted by Jim Kelm, Product Manager, Google Mobile Ads

Google Search for mobile now includes mobile app results

As mobile apps continue to proliferate in stores like Android Market and the iPhone App Store, finding relevant information on the web about these apps is becoming more important to help you decide which apps to download. In addition to helping you find the mobile app information you’re looking for, Google Search for mobile now also makes it easier for you to get the actual apps themselves while you’re searching.

As of today, if you go to Google.com on your iPhone or Android-powered device and search for an app, we’ll show special links and content at the top of the search results. You can tap these links to go directly to the app’s Android Market or iPhone App Store page. You can also get a quick look at some of the app’s basic details including the price, rating, and publisher. These results will appear when your search pertains to a mobile application and relevant, well-rated apps are found. For example, try searching for ‘download shazam’ on your Android-powered device or ‘bank of america app’ on your iPhone.

Mobile app search results are available today in the US, with other countries and devices planned for the future.


Allowing applications to play nice(r) with each other: Handling remote control buttons

[This post is by Jean-Michel Trivi, an engineer working on the Android Media framework, whose T-shirt of the day reads “all your media buttons are belong to you”. — Tim Bray]

Many Android devices come with the Music application used to play audio files stored on the device. Some devices ship with a wired headset that features transport control buttons, so users can for instance conveniently pause and restart music playback, directly from the headset.

But a user might use one application for music listening, and another for listening to podcasts, both of which should be controlled by the headset remote control.

If your media playback application creates a media playback service, just like Music, that responds to the media button events, how will the user know where those events are going? To Music, or to your new application?

In this article, we’ll see how to handle this properly in Android 2.2. We’ll first see how to set up intents to receive “MEDIA_BUTTON” intents. We’ll then describe how your application can appropriately become the preferred media button responder in Android 2.2. Since this feature relies on a new API, we’ll revisit the use of reflection to prepare your app to take advantage of Android 2.2, without restricting it to API level 8 (Android 2.2).

An example of the handling of media button intents

In our AndroidManifest.xml for this package we declare the class RemoteControlReceiver to receive MEDIA_BUTTON intents:

<receiver android:name="RemoteControlReceiver">
    <intent-filter>
        <action android:name="android.intent.action.MEDIA_BUTTON" />
    </intent-filter>
</receiver>

Our class to handle those intents can look something like this:

public class RemoteControlReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        if (Intent.ACTION_MEDIA_BUTTON.equals(intent.getAction())) {
            /* handle media button intent here by reading contents */
            /* of EXTRA_KEY_EVENT to know which key was pressed */
        }
    }
}
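
For illustration, here is a minimal sketch (not part of the original example) of how the receiver might read EXTRA_KEY_EVENT inside onReceive(); the playback actions are placeholders, since they depend on your own playback service:

KeyEvent event = (KeyEvent) intent.getParcelableExtra(Intent.EXTRA_KEY_EVENT);
if (event != null && event.getAction() == KeyEvent.ACTION_DOWN) {
    switch (event.getKeyCode()) {
        case KeyEvent.KEYCODE_HEADSETHOOK:
        case KeyEvent.KEYCODE_MEDIA_PLAY_PAUSE:
            // toggle playback in your own playback service (placeholder)
            break;
        case KeyEvent.KEYCODE_MEDIA_NEXT:
            // skip to the next track (placeholder)
            break;
    }
}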

In a media playback application, this receiver is used to react to headset button presses when your activity doesn’t have the focus. When it does, we override the Activity.onKeyDown() or onKeyUp() methods in the user interface to trap the headset button-related events, as sketched below.
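
As a quick sketch (again, not from the original example), trapping the same keys while the activity has focus could look like this, with the playback toggle left as a placeholder:

@Override
public boolean onKeyDown(int keyCode, KeyEvent event) {
    if (keyCode == KeyEvent.KEYCODE_HEADSETHOOK
            || keyCode == KeyEvent.KEYCODE_MEDIA_PLAY_PAUSE) {
        // toggle playback directly, since our UI currently has the focus (placeholder)
        return true; // consume the event so the receiver doesn't also handle it
    }
    return super.onKeyDown(keyCode, event);
}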

However, this is problematic in the scenario we mentioned earlier. When the user presses “play”, what application should start playing? The Music application? The user’s preferred podcast application?

Becoming the “preferred” media button responder

In Android 2.2, we are introducing two new methods in android.media.AudioManager to declare your intention to become the “preferred” component to receive media button events: registerMediaButtonEventReceiver() and its counterpart, unregisterMediaButtonEventReceiver(). Once the registration call is placed, the designated component will exclusively receive the ACTION_MEDIA_BUTTON intent just as in the example above.

In the activity below we create an instance of AudioManager, with which we will register our component. We also create a ComponentName instance that references our intended media button event responder.

public class MyMediaPlaybackActivity extends Activity {
    private AudioManager mAudioManager;
    private ComponentName mRemoteControlResponder;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        mAudioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
        mRemoteControlResponder = new ComponentName(getPackageName(),
                RemoteControlReceiver.class.getName());
    }

The system handles the media button registration requests in a “last one wins” manner. This means we need to choose the points at which it makes sense for the user to make this request. In a media playback application, appropriate times to register are, for instance:

  • when the UI is displayed: the user is interacting with that application, so (s)he expects it to be the one that will respond to the remote control,
  • when content starts playing (e.g. content finished downloading, or another application caused your service to play content).

Registration is performed here, for instance, when our UI comes to the foreground:

    @Override
    public void onResume() {
        super.onResume();
        mAudioManager.registerMediaButtonEventReceiver(
                mRemoteControlResponder);
    }

If we have previously registered our receiver, registering it again pushes it back to the top of the stack and doesn’t cause any duplicate registration.

Additionally, it may make sense for your registered component not to be called when your service or application is destroyed (as illustrated below), or under conditions that are specific to your application. For instance, in an application that reads to the user her/his appointments of the day, it could unregister when it’s done speaking the calendar entries of the day.

    @Override
    public void onDestroy() {
        super.onDestroy();
        mAudioManager.unregisterMediaButtonEventReceiver(
                mRemoteControlResponder);
    }

After “unregistering”, the previous component that requested to receive the media button intents will once again receive them.

Preparing your code for Android 2.2 without restricting it to Android 2.2

While you may appreciate the benefit this new API offers to the users, you might not want to restrict your application to devices that support this feature. Andy McFadden shows us how to use reflection to take advantage of features that are not available on all devices. Let’s use what we learned then to enable your application to use the new media button mechanism when it runs on devices that support this feature.

First we declare in our Activity two fields that will hold references to the two new methods we used previously for the registration mechanism:

    private static Method mRegisterMediaButtonEventReceiver;
    private static Method mUnregisterMediaButtonEventReceiver;

We then add a method that will use reflection on the android.media.AudioManager class to find the two methods when the feature is supported:

    private static void initializeRemoteControlRegistrationMethods() {
        try {
            if (mRegisterMediaButtonEventReceiver == null) {
                mRegisterMediaButtonEventReceiver = AudioManager.class.getMethod(
                        "registerMediaButtonEventReceiver",
                        new Class[] { ComponentName.class });
            }
            if (mUnregisterMediaButtonEventReceiver == null) {
                mUnregisterMediaButtonEventReceiver = AudioManager.class.getMethod(
                        "unregisterMediaButtonEventReceiver",
                        new Class[] { ComponentName.class });
            }
            /* success, this device will take advantage of better remote */
            /* control event handling */
        } catch (NoSuchMethodException nsme) {
            /* failure, still using the legacy behavior, but this app */
            /* is future-proof! */
        }
    }

The method fields will need to be initialized when our Activity class is loaded:

    static {
        initializeRemoteControlRegistrationMethods();
    }

We’re almost done. Our code will be easier to read and maintain if we wrap the use of the methods initialized through reflection in the following helpers. Note the actual method invocation on our AudioManager instance:

    private void registerRemoteControl() {
        try {
            if (mRegisterMediaButtonEventReceiver == null) {
                return;
            }
            mRegisterMediaButtonEventReceiver.invoke(mAudioManager,
                    mRemoteControlResponder);
        } catch (InvocationTargetException ite) {
            /* unpack original exception when possible */
            Throwable cause = ite.getCause();
            if (cause instanceof RuntimeException) {
                throw (RuntimeException) cause;
            } else if (cause instanceof Error) {
                throw (Error) cause;
            } else {
                /* unexpected checked exception; wrap and re-throw */
                throw new RuntimeException(ite);
            }
        } catch (IllegalAccessException ie) {
            Log.e("MyApp", "unexpected " + ie);
        }
    }

    private void unregisterRemoteControl() {
        try {
            if (mUnregisterMediaButtonEventReceiver == null) {
                return;
            }
            mUnregisterMediaButtonEventReceiver.invoke(mAudioManager,
                    mRemoteControlResponder);
        } catch (InvocationTargetException ite) {
            /* unpack original exception when possible */
            Throwable cause = ite.getCause();
            if (cause instanceof RuntimeException) {
                throw (RuntimeException) cause;
            } else if (cause instanceof Error) {
                throw (Error) cause;
            } else {
                /* unexpected checked exception; wrap and re-throw */
                throw new RuntimeException(ite);
            }
        } catch (IllegalAccessException ie) {
            System.err.println("unexpected " + ie);
        }
    }

We are now ready to use our two new methods, registerRemoteControl() and unregisterRemoteControl() in a project that runs on devices supporting API level 1, while still taking advantage of the features found in devices running Android 2.2.
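
To tie it together, here is one possible way (a sketch, assuming the fields and helpers above live in MyMediaPlaybackActivity) to wire the reflection-friendly helpers into the same lifecycle callbacks used earlier:

    @Override
    public void onResume() {
        super.onResume();
        registerRemoteControl();   // falls back to doing nothing on devices without the Android 2.2 API
    }

    @Override
    public void onDestroy() {
        super.onDestroy();
        unregisterRemoteControl(); // falls back to doing nothing on devices without the Android 2.2 API
    }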

Google Maps for BlackBerry Gets Biking Directions, Sharing, and New Search Results

Grab your BlackBerry, hop on your bike, and have your friends join you with Google Maps for mobile! After adding biking directions and sharing for Android folks a few weeks ago, we're happy to announce that Google Maps 4.2 for BlackBerry also lets you get biking directions, quickly see helpful info when searching, and share places with friends.

Biking directions
If you’ve been using Google Maps on your computer to get biking directions, trails, and lanes, you can now head out for a ride using just your BlackBerry. When getting directions in Google Maps, just choose to travel by bicycle to get an optimal bicycling route in the U.S. If you’re in the mood for a more scenic ride, you’ll also see the Bicycling layer on the map which shows dedicated bike-only trails (dark green), roads with bike lanes (light green), or roads that are good for biking but lack a dedicated lane (dashed green). You can always turn on this layer from the Layers menu to devise your own route.

Search and Share
The next time you're searching for a late night bite of pizza, you'll see a redesigned list view of results with pictures and ratings. Select one to see a simplified search results page with easy-access buttons for directions, calling, etc. and all the info you'll need right below. Select a section, such as “Reviews,” to see more. A new “share this place” option lets you send anyone place info, such as its address or phone number, by email or text message. In addition to specific places like a restaurant, you can also share any location you select on the map -- including a snapshot of where you are at the moment -- to help folks meet you outside or right where you’re standing!

To get started, go to m.google.com/maps in your BlackBerry's browser and install version 4.2. In case you’ve had previous installation hiccups, we've also fixed some issues with permissions and BlackBerry Enterprise Server installation on some 5.0 devices.

Learn more in the Help Center, ask questions in our Help Forum, or give us suggestions and vote on other people’s ideas on the Mobile Product Ideas page.