Stay connected to the market, wherever you are

Mobile phones are great for keeping in touch with the latest information, but when there’s a lot of data to look at, a small screen can be a drawback. For financial queries, where you might want to see stock quotes, the latest news, a market overview or portfolio details, we’ve just launched a new approach in Google search.

To try it out, go to google.com on your iPhone or Android-powered device (2.1 or later) and search for your favourite stock ticker symbol.

The first thing you’ll see is an interactive graph shown on a card. You can switch to different date ranges by tapping the buttons below the graph.




If you swipe the card from right to left, you’ll get the latest financial news for the company.

Swipe again for a market overview, and if you’re logged in to your Google account and have created a Google Finance portfolio, a further swipe will show a summary of your stock portfolio. Give it a try on your mobile device now to see how it works.

This feature is available in English with support for more languages coming soon. We hope you enjoy it and find it useful.

Identifying App Installations

[The contents of this post grew out of an internal discussion featuring many of the usual suspects who’ve been authors in this space. — Tim Bray]

In the Android group, from time to time we hear complaints from developers about problems they’re having coming up with reliable, stable, unique device identifiers. This worries us, because we think that tracking such identifiers isn’t a good idea, and that there are better ways to achieve developers’ goals.

Tracking Installations



It is very common, and perfectly reasonable, for a developer to want to track individual installations of their apps. It sounds plausible just to call TelephonyManager.getDeviceId() and use that value to identify the installation. There are problems with this approach: first, it doesn’t work reliably (see below); second, when it does work, that value survives device wipes (“factory resets”), so you could end up making a nasty mistake when one of your customers wipes their device and passes it on to another person.

To track installations, you could for example use a UUID as an identifier, and simply create a new one the first time an app runs after installation. Here is a sketch of a class named “Installation” with one static method Installation.id(Context context). You could imagine writing more installation-specific data into the INSTALLATION file.

import android.content.Context;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.UUID;

public class Installation {
    private static String sID = null;
    private static final String INSTALLATION = "INSTALLATION";

    public synchronized static String id(Context context) {
        if (sID == null) {
            File installation = new File(context.getFilesDir(), INSTALLATION);
            try {
                // Generate and persist the id on first run, then reuse it.
                if (!installation.exists())
                    writeInstallationFile(installation);
                sID = readInstallationFile(installation);
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
        return sID;
    }

    private static String readInstallationFile(File installation) throws IOException {
        RandomAccessFile f = new RandomAccessFile(installation, "r");
        byte[] bytes = new byte[(int) f.length()];
        f.readFully(bytes);
        f.close();
        return new String(bytes);
    }

    private static void writeInstallationFile(File installation) throws IOException {
        FileOutputStream out = new FileOutputStream(installation);
        String id = UUID.randomUUID().toString();
        out.write(id.getBytes());
        out.close();
    }
}

Identifying Devices

Suppose you feel that for the needs of your application, you need an actual hardware device identifier. This turns out to be a tricky problem.

In the past, when every Android device was a phone, things were simpler: TelephonyManager.getDeviceId() is required to return (depending on the network technology) the IMEI, MEID, or ESN of the phone, which is unique to that piece of hardware.

However, there are problems with this approach:

  • Non-phones: Wi-Fi-only devices or music players that don’t have telephony hardware just don’t have this kind of unique identifier.


  • Persistence: On devices which do have this, it persists across device data wipes and factory resets. It’s not clear at all if, in this situation, your app should regard this as the same device.


  • Privilege: It requires the READ_PHONE_STATE permission, which is irritating if you don’t otherwise use or need telephony.


  • Bugs: We have seen a few instances of production phones for which the implementation is buggy and returns garbage, for example zeros or asterisks.


MAC Address

It may be possible to retrieve a MAC address from a device’s Wi-Fi or Bluetooth hardware. We do not recommend using this as a unique identifier. To start with, not all devices have Wi-Fi. Also, if Wi-Fi is not turned on, the hardware may not report the MAC address.

Serial Number

Since Android 2.3 (“Gingerbread”) this is available via android.os.Build.SERIAL. Devices without telephony are required to report a unique device ID here; some phones may do so also.

ANDROID_ID

More specifically, Settings.Secure.ANDROID_ID. This is a 64-bit quantity that is generated and stored when the device first boots. It is reset when the device is wiped.

ANDROID_ID seems a good choice for a unique device identifier. There are downsides: First, it is not 100% reliable on releases of Android prior to 2.2 (“Froyo”). Also, there has been at least one widely-observed bug in a popular handset from a major manufacturer, where every instance has the same ANDROID_ID.
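If you do go down the ANDROID_ID route, reading it only takes a couple of lines. Here is a minimal sketch; the null/known-bad check is an assumption about how you might guard against the issues above, and the specific constant is the value widely reported for the buggy handsets, not something guaranteed by the platform.

import android.content.Context;
import android.provider.Settings;

public class DeviceIdentifier {
    /**
     * Returns ANDROID_ID, or null so the caller can fall back to another
     * heuristic (for example, the Installation id shown earlier).
     */
    public static String androidId(Context context) {
        String id = Settings.Secure.getString(
                context.getContentResolver(), Settings.Secure.ANDROID_ID);
        // "9774d56d682e549c" is the value widely reported as shared by every
        // unit of the buggy handset mentioned above (an assumption, not an
        // official constant).
        if (id == null || "9774d56d682e549c".equals(id)) {
            return null;
        }
        return id;
    }
}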

Conclusion

For the vast majority of applications, the requirement is to identify a particular installation, not a physical device. Fortunately, doing so is straightforward.

There are many good reasons for avoiding the attempt to identify a particular device. For those who want to try, the best approach is probably the use of ANDROID_ID on anything reasonably modern, with some fallback heuristics for legacy devices.

Word of Mouth: Introducing Voice Search for Indonesian, Malaysian and Latin American Spanish

(Read more about the launch of Voice Search in Latin American Spanish on the Google América Latina blog)

Today we are excited to announce the launch of Voice Search in Indonesian, Malaysian, and Latin American Spanish, making Voice Search available in over two dozen languages and accents since our first launch in November 2008. This accomplishment would not have been possible without the help of local users in the region - really, we couldn’t have done it without them. Let me explain:

In 2010 we launched Voice Search in Dutch, the first language where we used the “word of mouth” project, a crowd-sourcing effort to collect the most accurate voice data possible. The traditional method of acquiring voice samples is to license the data from companies who specialize in the distribution of speech and text databases. However, from day one we knew that to build the most accurate Voice Search acoustic models possible, the best data would come from the people who would use Voice Search once it launched - our users.

Since then, in each country, we found small groups of people who were avid fans of Google products and were part of a large social network, either in local communities or online. We gave them phones and asked them to get voice samples from their friends and family. Everyone was required to sign a consent form and all voice samples were anonymized. When possible, they also helped to test early versions of Voice Search as the product got closer to launch.

Building a speech recognizer is not just limited to localizing the user interface. We require thousands of hours of raw data to capture regional accents and idiomatic speech in all sorts of recording environments to mimic daily life use cases. For instance, when developing Voice Search for Latin American Spanish, we paid particular attention to Mexican and Argentinean Spanish. These two accents are more different from one another than any other pair of widely-used accents in all of South and Central America. Samples collected in these countries were very important bookends for building a version of Voice Search that would work across the whole of Latin America. We also chose key countries such as Peru, Chile, Costa Rica, Panama and Colombia to bridge the divergent accent varieties.

As an International Program Manager at Google, I have been fortunate enough to travel around the world and meet many of our local Google users. They often have great suggestions for the products that they love, and word of mouth was created with the vision that our users could participate in developing the product. These Voice Search launches would not have been possible without the help of our users, and we’re excited to be able to work together on the product development with the people who will ultimately use our products.

In-app Billing Launched on Android Market

[This post is by Eric Chu, Android Developer Ecosystem. —Dirk Dougherty]

Today, we're pleased to announce the launch of Android Market In-app Billing to developers and users. As an Android developer, you will now be able to publish apps that use In-app Billing and your users can make purchases from within your apps.

In-app Billing gives you more ways to monetize your apps with try-and-buy, virtual goods, upgrades, and other billing models. If you aren’t yet familiar with In-app Billing, we encourage you to learn more about it.

Several apps launching today are already using the service, including Tap Tap Revenge by Disney Mobile; Comics by ComiXology; Gun Bros, Deer Hunter Challenge HD, and WSOP3 by Glu Mobile; and Dungeon Defenders: FW Deluxe by Trendy Entertainment.

To try In-app Billing in your apps, start with the detailed documentation and complete sample app provided, which show how to implement the service in your app, set up in-app product lists in Android Market, and test your implementation. Also, it’s absolutely essential that you review the security guidelines to make sure your billing implementation is secure.

We look forward to seeing how you’ll use this new service in your apps!

In-App Billing on Android Market: Ready for Testing

[This post is by Eric Chu, Android Developer Ecosystem. —Dirk Dougherty]

Back in January we announced our plan to introduce Android Market In-app Billing this quarter. We're pleased to let you know that we will be launching In-app Billing next week.

In preparation for the launch, we are opening up Android Market for upload and end-to-end testing of your apps that use In-app Billing. You can now upload your apps to the Developer Console, create a catalog of in-app products, and set prices for them. You can then set up accounts to test in-app purchases. During these test transactions, the In-app Billing service interacts with your app exactly as it will for actual users and live transactions.

Note that although you can upload apps during this test development phase, you won’t be able to actually publish the apps to users until the full launch of the service next week.


To get you started, we’ve updated the developer documentation with information about how to set up product lists and test your in-app products. Also, it is absolutely essential that you review the security guidelines to make sure your billing implementation is secure.

We encourage you to start uploading and testing your apps right away.

Memory Analysis for Android Applications

[This post is by Patrick Dubroy, an Android engineer who writes about programming, usability, and interaction on his personal blog. — Tim Bray]

The Dalvik runtime may be garbage-collected, but that doesn't mean you can ignore memory management. You should be especially mindful of memory usage on mobile devices, where memory is more constrained. In this article, we're going to take a look at some of the memory profiling tools in the Android SDK that can help you trim your application's memory usage.

Some memory usage problems are obvious. For example, if your app leaks memory every time the user touches the screen, it will probably trigger an OutOfMemoryError eventually and crash your app. Other problems are more subtle, and may just degrade the performance of both your app (as garbage collections are more frequent and take longer) and the entire system.

Tools of the trade

The Android SDK provides two main ways of profiling the memory usage of an app: the Allocation Tracker tab in DDMS, and heap dumps. The Allocation Tracker is useful when you want to get a sense of what kinds of allocation are happening over a given time period, but it doesn't give you any information about the overall state of your application's heap. For more information about the Allocation Tracker, see the article on Tracking Memory Allocations. The rest of this article will focus on heap dumps, which are a more powerful memory analysis tool.

A heap dump is a snapshot of an application's heap, which is stored in a binary format called HPROF. Dalvik uses a format that is similar, but not identical, to the HPROF tool in Java. There are a few ways to generate a heap dump of a running Android app. One way is to use the Dump HPROF file button in DDMS. If you need to be more precise about when the dump is created, you can also create a heap dump programmatically by using the android.os.Debug.dumpHprofData() function.
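For example, here is a minimal sketch of a programmatic dump, assuming you want the file in your app's private files directory (the file name is just an illustration):

import android.content.Context;
import android.os.Debug;

import java.io.File;
import java.io.IOException;

public class HeapDumper {
    // Writes an HPROF snapshot of the current heap and returns its path,
    // ready to be pulled off the device and run through hprof-conv.
    public static String dumpHeap(Context context) throws IOException {
        File file = new File(context.getFilesDir(), "dump.hprof");
        Debug.dumpHprofData(file.getAbsolutePath());
        return file.getAbsolutePath();
    }
}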

To analyze a heap dump, you can use a standard tool like jhat or the Eclipse Memory Analyzer (MAT). However, first you'll need to convert the .hprof file from the Dalvik format to the J2SE HPROF format. You can do this using the hprof-conv tool provided in the Android SDK. For example:

hprof-conv dump.hprof converted-dump.hprof

Example: Debugging a memory leak

In the Dalvik runtime, the programmer doesn't explicitly allocate and free memory, so you can't really leak memory like you can in languages like C and C++. A "memory leak" in your code is when you keep a reference to an object that is no longer needed. Sometimes a single reference can prevent a large set of objects from being garbage collected.

Let's walk through an example using the Honeycomb Gallery sample app from the Android SDK. It's a simple photo gallery application that demonstrates how to use some of the new Honeycomb APIs. (To build and download the sample code, see the instructions.) We're going to deliberately add a memory leak to this app in order to demonstrate how it could be debugged.

Imagine that we want to modify this app to pull images from the network. In order to make it more responsive, we might decide to implement a cache which holds recently-viewed images. We can do that by making a few small changes to ContentFragment.java. At the top of the class, let's add a new static variable:

private static HashMap<String,Bitmap> sBitmapCache = new HashMap<String,Bitmap>();

This is where we'll cache the Bitmaps that we load. Now we can change the updateContentAndRecycleBitmap() method to check the cache before loading, and to add Bitmaps to the cache after they're loaded.

void updateContentAndRecycleBitmap(int category, int position) {
    if (mCurrentActionMode != null) {
        mCurrentActionMode.finish();
    }

    // Get the bitmap that needs to be drawn and update the ImageView.

    // Check if the Bitmap is already in the cache
    String bitmapId = "" + category + "." + position;
    mBitmap = sBitmapCache.get(bitmapId);

    if (mBitmap == null) {
        // It's not in the cache, so load the Bitmap and add it to the cache.
        // DANGER! We add items to this cache without ever removing any.
        mBitmap = Directory.getCategory(category).getEntry(position)
                .getBitmap(getResources());
        sBitmapCache.put(bitmapId, mBitmap);
    }
    ((ImageView) getView().findViewById(R.id.image)).setImageBitmap(mBitmap);
}

I've deliberately introduced a memory leak here: we add Bitmaps to the cache without ever removing them. In a real app, we'd probably want to limit the size of the cache in some way.
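As a hedged sketch of what "limiting the size" could look like, a LinkedHashMap in access order gives us simple LRU eviction; the capacity of 10 entries is an arbitrary number picked for illustration:

import android.graphics.Bitmap;

import java.util.LinkedHashMap;
import java.util.Map;

// A very small LRU cache: once it holds MAX_ENTRIES bitmaps, adding a new
// one evicts the least recently accessed entry, letting the garbage
// collector reclaim that Bitmap (assuming nothing else references it).
class BitmapLruCache extends LinkedHashMap<String, Bitmap> {
    private static final int MAX_ENTRIES = 10; // arbitrary illustration value

    BitmapLruCache() {
        // initialCapacity, loadFactor, accessOrder = true for LRU behavior
        super(16, 0.75f, true);
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, Bitmap> eldest) {
        return size() > MAX_ENTRIES;
    }
}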

Examining heap usage in DDMS

The Dalvik Debug Monitor Server (DDMS) is one of the primary Android debugging tools. DDMS is part of the ADT Eclipse plug-in, and a standalone version can also be found in the tools/ directory of the Android SDK. For more information on DDMS, see Using DDMS.

Let's use DDMS to examine the heap usage of this app. You can start up DDMS in one of two ways:

  • from Eclipse: click Window > Open Perspective > Other... > DDMS
  • or from the command line: run ddms (or ./ddms on Mac/Linux) in the tools/ directory

Select the process com.example.android.hcgallery in the left pane, and then click the Show heap updates button in the toolbar. Then, switch to the VM Heap tab in DDMS. It shows some basic stats about our heap memory usage, updated after every GC. To see the first update, click the Cause GC button.

We can see that our live set (the Allocated column) is a little over 8MB. Now flip through the photos, and watch that number go up. Since there are only 13 photos in this app, the amount of memory we leak is bounded. In some ways, this is the worst kind of leak to have, because we never get an OutOfMemoryError indicating that we are leaking.

Creating a heap dump

Let's use a heap dump to track down the problem. Click the Dump HPROF file button in the DDMS toolbar, choose where you want to save the file, and then run hprof-conv on it. In this example, I'll be using the standalone version of MAT (version 1.0.1), available from the MAT download site.

If you're running ADT (which includes a plug-in version of DDMS) and have MAT installed in Eclipse as well, clicking the “dump HPROF” button will automatically do the conversion (using hprof-conv) and open the converted HPROF file in Eclipse, where it will be opened by MAT.

Analyzing heap dumps using MAT

Start up MAT and load the converted HPROF file we just created. MAT is a powerful tool, and it's beyond the scope of this article to explain all its features, so I'm just going to show you one way you can use it to detect a leak: the Histogram view. The Histogram view shows a list of classes sortable by the number of instances, the shallow heap (total amount of memory used by all instances), or the retained heap (total amount of memory kept alive by all instances, including other objects that they have references to).

If we sort by shallow heap, we can see that instances of byte[] are at the top. As of Android 3.0 (Honeycomb), the pixel data for Bitmap objects is stored in byte arrays (previously it was not stored in the Dalvik heap), and based on the size of these objects, it's a safe bet that they are the backing memory for our leaked bitmaps.

Right-click on the byte[] class and select List Objects > with incoming references. This produces a list of all byte arrays in the heap, which we can sort based on Shallow Heap usage.

Pick one of the big objects, and drill down on it. This will show you the path from the root set to the object -- the chain of references that keeps this object alive. Lo and behold, there's our bitmap cache!

MAT can't tell us for sure that this is a leak, because it doesn't know whether these objects are needed or not -- only the programmer can do that. In this case, the cache is using a large amount of memory relative to the rest of the application, so we might consider limiting the size of the cache.

Comparing heap dumps with MAT

When debugging memory leaks, sometimes it's useful to compare the heap state at two different points in time. To do this, you'll need to create two separate HPROF files (don't forget to convert them using hprof-conv).

Here's how you can compare two heap dumps in MAT (it's a little complicated):

  1. Open the first HPROF file (using File > Open Heap Dump).
  2. Open the Histogram view.
  3. In the Navigation History view (use Window > Navigation History if it's not visible), right click on histogram and select Add to Compare Basket.
  4. Open the second HPROF file and repeat steps 2 and 3.
  5. Switch to the Compare Basket view, and click Compare the Results (the red "!" icon in the top right corner of the view).

Conclusion

In this article, I've shown how the Allocation Tracker and heap dumps can give you a better sense of your application's memory usage. I also showed how the Eclipse Memory Analyzer (MAT) can help you track down memory leaks in your app. MAT is a powerful tool, and I've only scratched the surface of what you can do with it; if you'd like to learn more, the MAT documentation is a good place to start.

Explore the world with updated apps for iPhone: Check in with Latitude and use Places in 30 languages

We’re happy to announce updates for two iPhone apps that help you connect the people you care about with the places you love: Google Latitude with check-ins and Google Places in 30 languages.

Check in with Google Latitude for iPhone
After adding check-ins to Google Latitude for Android-powered devices, we’re happy to announce that you can now start checking in at places with the updated Latitude app for iPhone.

With Google Latitude, you can see where your Latitude friends are on a map and choose to continuously share where you are. Now, you can also choose to check in at specific places, such as your favorite restaurant or a park, to add more context to your location. You'll be able to not only let friends know that you’re just around the corner but also let them know the actual coffee shop that you’re at in case they want to join you. If Latitude is set to continuously update your location, you’ll also be automatically checked out when you leave. This way, friends aren’t left guessing if you’re still there or not before heading over to join you for a latte.


Tap the “Check in” button to start checking in at nearby places. Keep checking in every time you visit your favorite places to start gaining special status there. You’ll not only progress to become a Regular, VIP, and then Guru at your favorite places, but if you’re near Austin, Texas, gaining status lets you unlock check-in offers at over 60 places.

Just like with sharing your location, you can control your Latitude check-in privacy. Checking in is 100% opt-in, and you can choose to share any check-in with your friends on Latitude, publicly on the web and your Google profile, or keep it just for yourself.

To start checking in with Latitude on your iPhone, update the Latitude app from the App Store. The app requires iOS 4 and above, and it's available for iPhone 3GS, iPhone 4, iPad, and iPod touch (3rd/4th generation). However, background location updating is only available on the iPhone 3GS, iPhone 4, and iPad 3G.

Google Places in 30 languages
Best ever! Me gusta! Mi piace! Ich liebe es! Wherever you are and whatever language you speak, we want to give you the best personalized place recommendations when you use Google Places with Hotpot. Update the Google Places app from the App Store to rate on the go and get personalized recommendations for places in 30 languages.


You’ll also have one more way to personalize your experience: saved places. Sign in with your Google Account using the info icon in the top left corner. Then, tap the new “Saved” icon on the app’s main screen to see all the places that you’ve saved or starred from the app, google.com/hotpot or maps.google.com.

Updates will appear in the App Store in supported countries throughout today. Get the latest version of Google Places from the App Store and start discovering great new places wherever you are!

Introducing Nexus S 4G for Sprint

Recently, we introduced Nexus S from Google, the first phone to run Android 2.3, Gingerbread. In addition to the UMTS-capable Nexus S, today we’re introducing Nexus S 4G from Google, available for Sprint. Nexus S 4G is part of the Nexus line of devices which provide a pure Google experience and run the latest and greatest Android releases and Google mobile apps.

We co-developed Nexus S 4G with Samsung to tightly integrate hardware and software and highlight the advancements of Gingerbread. Nexus S 4G takes advantage of Sprint’s high-speed 4G data network. It features a 4” Contour Display designed to fit comfortably in the palm of your hand and along the side of your face. It also features a 1GHz Hummingbird processor, front and rear facing cameras, 16GB of internal memory, and NFC (near field communication) hardware that lets you read information from everyday objects that have NFC tags.

In addition, today we’re excited to announce that Sprint customers will soon be able to take advantage of the full set of Google Voice features without changing or porting their number.

You can find more Nexus S information and videos at google.com/nexus or follow @googlenexus on Twitter for the latest updates. Nexus S 4G can be purchased this spring online and in-store from Sprint retailers and Best Buy and Best Buy Mobile stores in the U.S.


Posted by Andy Rubin, VP of Engineering

Application Stats on Android Market

[This post is by Eric Chu, Android Developer Ecosystem. —Dirk Dougherty]

On the Android Market team, it’s been our goal to bring you improved ways of seeing and understanding the installation performance of your published applications. We know that this information is critical in helping you tune your development and marketing efforts. Today I’m pleased to let you know about an important new feature that we’ve added to Android Market called Application Statistics.

Application Statistics is a new type of dashboard in the Market Developer Console that gives you an overview of the installation performance of your apps. It provides charts and tables that summarize each app’s active installation trend over time, as well as its distribution across key dimensions such as Android platform versions, devices, user countries, and user languages. For additional context, the dashboard also shows the comparable aggregate distribution for all app installs from Android Market (numbering in the billions). You could use this data to observe how your app performs relative to the rest of Market or decide what to develop next.


To start with, we’ve seeded the Application Statistics dashboards with data going back to December 22, 2010. Going forward, we’ll be updating the data daily.

We encourage you to check out these new dashboards and we hope they’ll give you new and useful insights into your apps’ installation performance. You can access the Statistics dashboards from the main Listings page in the Developer Console.

Watch for more announcements soon. We are continuing to work hard to deliver more reporting features to help you manage your products successfully on Android Market.

Google Search app for iPhone—a new name and a new look

If you need to do a Google search on your iPhone or iPod touch, it's now faster and easier when you use our redesigned Google Search app, formerly Google Mobile App. If you've been using Google Mobile App for a while, you'll notice that things look different.

The redesigned home screen of Google Search app.


First, you’ll see that there are now more ways to interact with the app. When browsing through search results or looking at a webpage, you can swipe down to see the search bar or change your settings. For those who use other Google apps, there’s an Apps button at the bottom of the screen for rapid access to the mobile versions of our products.

We also included a new toolbar that will make it easier for you to filter your results. You can open this toolbar by swiping from left to right — either before you search or once you’ve got your results. If you only want images, just tap “Images,” and the results will update as shown:


The toolbar helps you to get to the right kind of results.

Second, we’ve made it easier to pick up searching where you left off. If you leave the app and come back later, you’ll be able either to start a new search right away (just tap in the search box to type, hit the microphone button to do a voice search or tap on the camera icon to use Google Goggles) or get back to exactly where you were by tapping on the lower part of the page.

Finally, there are a number of improvements we’ve made to everything else you love in the app, including Google Goggles, Voice Search, Search with My Location, Gmail unread counts and more. There's a lot in the app, so we've added a simple help feature to let you explore it. Access this by tapping the question mark above the Google logo.

The help screen can be accessed from anywhere in Google Search app.




Download and try Google Search app today; it’s available free from the iTunes App Store. You can also scan the QR code below.


Android 3.0 Hardware Acceleration

[This post is by Romain Guy, who likes things on your screen to move fast. —Tim Bray]

One of the biggest changes we made to Android for Honeycomb is the addition of a new rendering pipeline so that applications can benefit from hardware accelerated 2D graphics. Hardware accelerated graphics are nothing new to the Android platform; they have always been used for window composition and OpenGL games, for instance. With this new rendering pipeline, however, applications can benefit from an extra boost in performance. On a Motorola Xoom device, all the standard applications like Browser and Calendar use hardware-accelerated 2D graphics.

In this article, I will show you how to enable the hardware accelerated 2D graphics pipeline in your application and give you a few tips on how to use it properly.

Go faster

To enable the hardware accelerated 2D graphics, open your AndroidManifest.xml file and add the following attribute to the <application /> tag:

    android:hardwareAccelerated="true"

If your application uses only standard widgets and drawables, this should be all you need to do. Once hardware acceleration is enabled, all drawing operations performed on a View's Canvas are performed using the GPU.

If you have custom drawing code you might need to do a bit more, which is in part why hardware acceleration is not enabled by default. And it's why you might want to read the rest of this article, to understand some of the important details of acceleration.

Controlling hardware acceleration

Because of the characteristics of the new rendering pipeline, you might run into issues with your application. Problems usually manifest themselves as invisible elements, exceptions or different-looking pixels. To help you, Android gives you 4 different ways to control hardware acceleration. You can enable or disable it on the following elements:

  • Application

  • Activity

  • Window

  • View

To enable or disable hardware acceleration at the application or activity level, use the XML attribute mentioned earlier. The following snippet enables hardware acceleration for the entire application but disables it for one activity:

    <application android:hardwareAccelerated="true">
        <activity ... />
        <activity android:hardwareAccelerated="false" />
    </application>

If you need more fine-grained control, you can enable hardware acceleration for a given window at runtime:

    getWindow().setFlags(
            WindowManager.LayoutParams.FLAG_HARDWARE_ACCELERATED,
            WindowManager.LayoutParams.FLAG_HARDWARE_ACCELERATED);

Note that you currently cannot disable hardware acceleration at the window level. Finally, hardware acceleration can be disabled on individual views:

    view.setLayerType(View.LAYER_TYPE_SOFTWARE, null);

Layer types have many other usages that will be described later.

Am I hardware accelerated?

It is sometimes useful for an application, or more likely a custom view, to know whether it currently is hardware accelerated. This is particularly useful if your application does a lot of custom drawing and not all operations are properly supported by the new rendering pipeline.

There are two different ways to check whether the application is hardware accelerated:

  • View.isHardwareAccelerated() returns true if the View is attached to a hardware accelerated window.

  • Canvas.isHardwareAccelerated() returns true if the Canvas is hardware accelerated.

If you must do this check in your drawing code, it is highly recommended to use Canvas.isHardwareAccelerated() instead of View.isHardwareAccelerated(). Indeed, even when a View is attached to a hardware accelerated window, it can be drawn using a non-hardware accelerated Canvas. This happens for instance when drawing a View into a bitmap for caching purposes.
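For example, a custom View could keep an unsupported operation on the software path only. This is just a sketch; the two drawing helpers are hypothetical placeholders:

import android.content.Context;
import android.graphics.Canvas;
import android.view.View;

public class MyCustomView extends View {
    public MyCustomView(Context context) {
        super(context);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        if (canvas.isHardwareAccelerated()) {
            // GPU path: stick to operations the hardware pipeline supports.
            drawSimpleClip(canvas);
        } else {
            // Software path: free to use clipPath() and other operations
            // listed as unsupported below.
            drawFancyClip(canvas);
        }
    }

    private void drawSimpleClip(Canvas canvas) { /* ... */ }
    private void drawFancyClip(Canvas canvas) { /* ... */ }
}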

What drawing operations are supported?

The current hardware accelerated 2D pipeline supports the most commonly used Canvas operations, and then some. We implemented all the operations needed to render the built-in applications, all the default widgets and layouts, and common advanced visual effects (reflections, tiled textures, etc.) There are however a few operations that are currently not supported, but might be in a future version of Android:

  • Canvas
    • clipPath

    • clipRegion

    • drawPicture

    • drawPoints

    • drawPosText

    • drawTextOnPath

    • drawVertices


  • Paint
    • setLinearText

    • setMaskFilter

    • setRasterizer


In addition, some operations behave differently when hardware acceleration is enabled:

  • Canvas
    • clipRect: XOR, Difference and ReverseDifference clip modes are ignored; 3D transforms do not apply to the clip rectangle

    • drawBitmapMesh: colors array is ignored

    • drawLines: anti-aliasing is not supported

    • setDrawFilter: can be set, but ignored


  • Paint
    • setDither: ignored

    • setFilterBitmap: filtering is always on

    • setShadowLayer: works with text only


  • ComposeShader
    • A ComposeShader can only contain shaders of different types (a BitmapShader and a LinearGradientShader for instance, but not two instances of BitmapShader)

    • A ComposeShader cannot contain a ComposeShader


If drawing code in one of your views is affected by any of the missing features or limitations, you don't have to miss out on the advantages of hardware acceleration for your overall application. Instead, consider rendering the problematic view into a bitmap or setting its layer type to LAYER_TYPE_SOFTWARE. In both cases, you will switch back to the software rendering pipeline.

Dos and don'ts

Switching to hardware accelerated 2D graphics is a great way to get smoother animations and faster rendering in your application but it is by no means a magic bullet. Your application should be designed and implemented to be GPU friendly. It is easier than you might think if you follow these recommendations:

  • Reduce the number of Views in your application: the more Views the system has to draw, the slower it will be. This applies to the software pipeline as well; it is one of the easiest ways to optimize your UI.

  • Avoid overdraw: always make sure that you are not drawing too many layers on top of each other. In particular, make sure to remove any Views that are completely obscured by other opaque views on top of them. If you need to draw several layers blended on top of each other, consider merging them into a single one. A good rule of thumb with current hardware is to not draw more than 2.5 times the number of pixels on screen per frame (and transparent pixels in a bitmap count!)

  • Don't create render objects in draw methods: a common mistake is to create a new Paint, or a new Path, every time a rendering method is invoked. This is not only wasteful, forcing the system to run the GC more often; it also bypasses caches and optimizations in the hardware pipeline (see the sketch after this list).

  • Don't modify shapes too often: complex shapes, paths and circles for instance, are rendered using texture masks. Every time you create or modify a Path, the hardware pipeline must create a new mask, which can be expensive.

  • Don't modify bitmaps too often: every time you change the content of a bitmap, it needs to be uploaded again as a GPU texture the next time you draw it.

  • Use alpha with care: when a View is made translucent using View.setAlpha(), an AlphaAnimation or an ObjectAnimator animating the “alpha” property, it is rendered in an off-screen buffer which doubles the required fill-rate. When applying alpha on very large views, consider setting the View's layer type to LAYER_TYPE_HARDWARE.
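Here is the sketch referenced above for the render-object rule: the Paint is allocated once as a field instead of inside onDraw(). The class and field names are invented for the example.

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.view.View;

public class CircleView extends View {
    // Allocate the Paint once, not on every draw pass.
    private final Paint mCirclePaint = new Paint(Paint.ANTI_ALIAS_FLAG);

    public CircleView(Context context) {
        super(context);
        mCirclePaint.setColor(Color.RED);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        // BAD: a new Paint() here would allocate on every frame, trigger
        // extra GCs, and defeat the hardware pipeline's caches.
        canvas.drawCircle(getWidth() / 2f, getHeight() / 2f,
                getWidth() / 4f, mCirclePaint);
    }
}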

View layers

Since Android 1.0, Views have had the ability to render into off-screen buffers, either by using a View's drawing cache, or by using Canvas.saveLayer(). Off-screen buffers, or layers, have several interesting usages. They can be used to get better performance when animating complex Views or to apply composition effects. For instance, fade effects are implemented by using Canvas.saveLayer() to temporarily render a View into a layer and then compositing it back on screen with an opacity factor.

Because layers are so useful, Android 3.0 gives you more control over how and when to use them. To do so, we have introduced a new API called View.setLayerType(int type, Paint p). This API takes two parameters: the type of layer you want to use and an optional Paint that describes how the layer should be composited. The paint parameter may be used to apply color filters, special blending modes or opacity to a layer. A View can use one of 3 layer types:

  • LAYER_TYPE_NONE: the View is rendered normally, and is not backed by an off-screen buffer.

  • LAYER_TYPE_HARDWARE: the View is rendered in hardware into a hardware texture if the application is hardware accelerated. If the application is not hardware accelerated, this layer type behaves the same as LAYER_TYPE_SOFTWARE.

  • LAYER_TYPE_SOFTWARE: the View is rendered in software into a bitmap.

The type of layer you will use depends on your goal:

  • Performance: use a hardware layer type to render a View into a hardware texture. Once a View is rendered into a layer, its drawing code does not have to be executed until the View calls invalidate(). Some animations, for instance alpha animations, can then be applied directly onto the layer, which is very efficient for the GPU to do.

  • Visual effects: use a hardware or software layer type and a Paint to apply special visual treatments to a View. For instance, you can draw a View in black and white using a ColorMatrixColorFilter.

  • Compatibility: use a software layer type to force a View to be rendered in software. This is an easy way to work around limitations of the hardware rendering pipeline.

Layers and animations

Hardware-accelerated 2D graphics help deliver a faster and smoother user experience, especially when it comes to animations. Running an animation at 60 frames per second is not always possible when animating complex views that issue a lot of drawing operations. If you are running an animation in your application and do not obtain the smooth results you want, consider enabling hardware layers on your animated views.

When a View is backed by a hardware layer, some of its properties are handled by the way the layer is composited on screen. Setting these properties will be efficient because they do not require the view to be invalidated and redrawn. Here is the list of properties that will affect the way the layer is composited; calling the setter for any of these properties will result in optimal invalidation and no redraw of the targeted View:

  • alpha: to change the layer's opacity

  • x, y, translationX, translationY: to change the layer's position

  • scaleX, scaleY: to change the layer's size

  • rotation, rotationX, rotationY: to change the layer's orientation in 3D space

  • pivotX, pivotY: to change the layer's transformations origin

These properties are the names used when animating a View with an ObjectAnimator. If you want to set/get these properties, call the appropriate setter or getter. For instance, to modify the alpha property, call setAlpha(). The following code snippet shows the most efficient way to rotate a View in 3D around the Y axis:

    view.setLayerType(View.LAYER_TYPE_HARDWARE, null);
    ObjectAnimator.ofFloat(view, "rotationY", 180).start();

Since hardware layers consume video memory, it is highly recommended you enable them only for the duration of the animation. This can be achieved with animation listeners:

    view.setLayerType(View.LAYER_TYPE_HARDWARE, null);
    ObjectAnimator animator = ObjectAnimator.ofFloat(view, "rotationY", 180);
    animator.addListener(new AnimatorListenerAdapter() {
        @Override
        public void onAnimationEnd(Animator animation) {
            view.setLayerType(View.LAYER_TYPE_NONE, null);
        }
    });
    animator.start();

New drawing model

Along with hardware-accelerated 2D graphics, Android 3.0 introduces another major change in the UI toolkit’s drawing model: display lists, which are only enabled when hardware acceleration is turned on. To fully understand display lists and how they may affect your application it is important to also understand how Views are drawn.

Whenever an application needs to update a part of its UI, it invokes invalidate() (or one of its variants) on any View whose content has changed. The invalidation messages are propagated all the way up the view hierarchy to compute the dirty region, the region of the screen that needs to be redrawn. The system then draws any View in the hierarchy that intersects with the dirty region. The drawing model is therefore made of two stages:

  1. Invalidate the hierarchy

  2. Draw the hierarchy

There are unfortunately two drawbacks to this approach. First, this drawing model requires execution of a lot of code on every draw pass. Imagine for instance your application calls invalidate() on a button and that button sits on top of a more complex View like a MapView. When it comes time to draw, the drawing code of the MapView will be executed even though the MapView itself has not changed.

The second issue with that approach is that it can hide bugs in your application. Since views are redrawn anytime they intersect with the dirty region, a View whose content you changed might be redrawn even though invalidate() was not called on it. When this happens, you are relying on another View getting invalidated to obtain the proper behavior. Needless to say, this behavior can change every time you modify your application ever so slightly. Remember this rule: always call invalidate() on a View whenever you modify data or state that affects this View’s drawing code. This applies only to custom code since setting standard properties, like the background color or the text in a TextView, will cause invalidate() to be called properly internally.
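As a small sketch of that rule, a custom View's setter should invalidate the View whenever it changes state that its onDraw() depends on (the class and property here are invented for illustration):

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.view.View;

public class BadgeView extends View {
    private final Paint mPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private int mCount;

    public BadgeView(Context context) {
        super(context);
        mPaint.setTextSize(48f);
    }

    public void setCount(int count) {
        if (mCount != count) {
            mCount = count;
            // The drawing code below depends on mCount, so we must
            // invalidate; with display lists, relying on a sibling View's
            // invalidation is no longer enough.
            invalidate();
        }
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.drawText(String.valueOf(mCount), 0, getHeight() / 2f, mPaint);
    }
}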

Android 3.0 still relies on invalidate() to request screen updates and draw() to render views. The difference is in how the drawing code is handled internally. Rather than executing the drawing commands immediately, the UI toolkit now records them inside display lists. This means that display lists do not contain any logic, but rather the output of the view hierarchy’s drawing code. Another interesting optimization is that the system only needs to record/update display lists for views marked dirty by an invalidate() call; views that have not been invalidated can be redrawn simply by re-issuing the previously recorded display list. The new drawing model now contains 3 stages:

  1. Invalidate the hierarchy

  2. Record/update display lists

  3. Draw the display lists

With this model, you cannot rely on a View intersecting the dirty region to have its draw() method executed anymore: to ensure that a View’s display list will be recorded, you must call invalidate(). This kind of bug becomes very obvious with hardware acceleration turned on and is easy to fix: you would see the previous content of a View after changing it.

Using display lists also benefits animation performance. In the previous section, we saw that setting specific properties (alpha, rotation, etc.) does not require invalidating the targeted View. This optimization also applies to views with display lists (any View when your application is hardware accelerated.) Let’s imagine you want to change the opacity of a ListView embedded inside a LinearLayout, above a Button. Here is what the (simplified) display list of the LinearLayout looks like before changing the list’s opacity:

    DrawDisplayList(ListView)
    DrawDisplayList(Button)

After invoking listView.setAlpha(0.5f) the display list now contains this:

    SaveLayerAlpha(0.5)
    DrawDisplayList(ListView)
    Restore
    DrawDisplayList(Button)

The complex drawing code of ListView was not executed. Instead the system only updated the display list of the much simpler LinearLayout. In previous versions of Android, or in an application without hardware acceleration enabled, the drawing code of both the list and its parent would have to be executed again.

It’s your turn

Enabling hardware accelerated 2D graphics in your application is easy, particularly if you rely solely on standard views and drawables. Just keep in mind the few limitations and potential issues described in this document and make sure to thoroughly test your application!

Renderscript Part 2

[This post is by R. Jason Sams, an Android engineer who specializes in graphics, performance tuning, and software architecture. —Tim Bray]

In Introducing Renderscript I gave a brief overview of this technology. In this post I’ll look at “compute” in more detail. In Renderscript we use “compute” to mean offloading of data processing from Dalvik code to Renderscript code which may run on the same or different processor(s).

Renderscript’s Design Goals

Renderscript has three primary goals, given here from most to least important.

Portability: Application code needs to be able to run across all devices, even those with radically different hardware. ARM currently comes in several variants — with and without VFP, with and without NEON, and with various register counts. Beyond ARM, there are other CPU architectures like x86, several GPU architectures, and even more DSP architectures.

Performance: The second objective is to get as much performance as possible within the constraints of Portability. For Renderscript to make sense we need to achieve much greater performance than established solutions.

Usability: The third goal is to simplify development as much as possible. Where possible we automate steps to avoid glue code and other developer busy work.

Those three goals lead to several design trade-offs. It’s these trade-offs that separate Renderscript from the existing approaches on the device, such as Dalvik or the NDK. They should be thought of as different tools intended to solve different problems.

Core Design Choices

The first choice that needed to be made is what language should be used. When it comes to languages there are almost unlimited options. Shading style languages, C, and C++ were considered. In the end the shading style languages were ruled out due to the need to be able to manipulate data structures for graphical applications such as scene graphs. The lack of pointers and recursion was considered a crippling limitation for usability. C++ on the other hand was very desirable but ran into issues with portability. The advanced C++ features are very difficult to run on non-CPU hardware. In the end we chose to base Renderscript on C99 because it offers equal performance to the other choices, is very well understood by developers, and poses no issues running on a wide range of hardware.

The next design trade-off was work flow. Specifically we focused on how to convert source code to machine code. We explored several options and actually implemented two different solutions during the development of Renderscript. The older versions (Eclair through Gingerbread) compiled the C source code all the way to machine code on the device. While this had some nice properties, such as the ability for applications to generate source on the fly, it turned out to be a usability problem. Having to compile your app, install it, run it, then find your syntax error was painful. Also, the weaker CPUs in devices limited the static analysis and scope of optimizations that could be done.

Then we switched to LLVM, moving to a model where scripts are compiled and analyzed on the host using a modified version of clang. We perform high level optimizations at this stage, then emit LLVM bitcode. The translation of the intermediate bitcode to machine code still happens on the device (along with additional device-specific optimizations).

Our last big trade-off for compute was thread launching. The trade-off here is between performance and portability. Given sufficient knowledge, existing compute solutions allow a developer to tune an application for a specific hardware platform to the detriment of others. Given unlimited time and resources developers could tune for every hardware combination. While testing and tuning a variety of devices is never bad, no amount of work allows them to tune for unreleased hardware they don’t yet have. A more portable solution places the tuning burden on the runtime, providing greater average performance at the cost of peak performance. Given that the number one goal was portability we chose to place this burden on the runtime.

A secondary effect of choosing runtime thread-launch management is that dynamic decisions can be made about where to run a script. For example, some compute hardware can support pointers and recursion while others cannot. We could have chosen to disallow these things and give developers a lowest common denominator API, but we chose to instead let the runtime analyze the scripts. This allows developers to get full use of hardware that supports these features, since there is always a fully featured CPU to fall back upon. In the end, developers can focus on writing good apps and the hardware manufacturers can compete on making the most fully featured and efficient hardware. As new features appear, applications will benefit without application code changes.

Usability was a major driver in Renderscript’s design. Most existing compute and graphics platforms require elaborate glue logic to tie the high performance code back to the core application code. This code is very bug prone and usually painful to write. The static analysis we do in the host Renderscript compiler is helpful in solving this issue. Each user script generates a Dalvik “glue” class. Names for the glue class and its accessors are derived from the contents of the script. This greatly simplifies the use of the scripts from Dalvik.

Example: The Application Level

Given these trade-offs, what does a simple compute application look like? In this very basic example we will take a normal android.graphics.Bitmap object and run a script that copies it to a second bitmap, converting it to monochrome along the way. Let’s look at the application code which invokes the script before we look at the script itself; this comes from the HelloCompute SDK sample:

    private Bitmap mBitmapIn;
    private Bitmap mBitmapOut;
    private RenderScript mRS;
    private Allocation mInAllocation;
    private Allocation mOutAllocation;
    private ScriptC_mono mScript;

    private void createScript() {
        mRS = RenderScript.create(this);

        mInAllocation = Allocation.createFromBitmap(mRS, mBitmapIn,
                Allocation.MipmapControl.MIPMAP_NONE,
                Allocation.USAGE_SCRIPT);
        mOutAllocation = Allocation.createTyped(mRS, mInAllocation.getType());

        mScript = new ScriptC_mono(mRS, getResources(), R.raw.mono);

        mScript.set_gIn(mInAllocation);
        mScript.set_gOut(mOutAllocation);
        mScript.set_gScript(mScript);
        mScript.invoke_filter();
        mOutAllocation.copyTo(mBitmapOut);
    }

This function assumes that the two bitmaps have already been created and are of the same size and format.
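For completeness, here is one hedged way those bitmaps could be prepared in the same Activity before createScript() is called; R.drawable.data is a placeholder resource name, not necessarily what the SDK sample uses.

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

// Lives in the same Activity as createScript() above.
private void loadBitmaps() {
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inPreferredConfig = Bitmap.Config.ARGB_8888;
    // R.drawable.data is a placeholder for whatever source image you bundle.
    mBitmapIn = BitmapFactory.decodeResource(getResources(), R.drawable.data, options);
    // The output must match the input's dimensions and configuration, since
    // createTyped() builds the second Allocation from the first one's Type.
    mBitmapOut = Bitmap.createBitmap(mBitmapIn.getWidth(), mBitmapIn.getHeight(),
            mBitmapIn.getConfig());
}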

The first thing all Renderscript applications need is a context object. This is the core object used to create and manage all other Renderscript objects. The first line of the function creates this object, mRS. The object must be kept alive for as long as the application intends to use it or any objects created with it.

The next two function calls create compute allocations from the Bitmaps. Renderscript has its own memory allocator, because the memory may potentially be shared by multiple processors and possibly exist in more than one memory space. When an allocation is created its potential uses need to be enumerated so the system may choose the correct type of memory for its intended uses.

The first function, createFromBitmap(), creates a matching Renderscript allocation object and copies the contents of the bitmap into the allocation. Allocations are the basic units of memory used in Renderscript. The second Allocation, created with createTyped(), generates an Allocation identical in structure to the first. The definition of that structure is retrieved from the first with the getType() query. Renderscript types define the structure of an Allocation. In this case the type was generated from the height, width, and format of the incoming bitmap.

The next line loads the script, which is named “mono.rs”. R.raw.mono identifies it; scripts are stored as raw resources in an application’s APK. Note the name of the auto-generated “glue” class, ScriptC_mono.

The next three lines set properties of the script, using generated methods in the “glue” class.

Now we have everything set up. The function invoke_filter() actually does some work for us. This causes the function filter() in the script to be called. If the function had parameters they could be passed here. Return values are not allowed as invocations are asynchronous.

The last line of the function copies the result of our compute script back to the managed bitmap; it has the necessary synchronization code built-in to ensure the script has completed running.

Example: The Script

Here’s the Renderscript stored in mono.rs which the application code above invokes:

#pragma version(1)
#pragma rs java_package_name(com.android.example.hellocompute)

rs_allocation gIn;
rs_allocation gOut;
rs_script gScript;

const static float3 gMonoMult = {0.299f, 0.587f, 0.114f};

void root(const uchar4 *v_in, uchar4 *v_out, const void *usrData, uint32_t x, uint32_t y) {
    float4 f4 = rsUnpackColor8888(*v_in);

    float3 mono = dot(f4.rgb, gMonoMult);
    *v_out = rsPackColorTo8888(mono);
}

void filter() {
    rsForEach(gScript, gIn, gOut, 0);
}

The first line is simply an indication to the compiler which revision of the native Renderscript API the script is written against. The second line controls the package association of the generated reflected code.

The three globals listed correspond to the globals which were set up in our managed code. The fourth global is not reflected because it is marked as static. Non-static const globals are also allowed, but they only generate a reflected get method. This can be useful for keeping constants in sync between scripts and managed code.

The function root() is special to Renderscript. Conceptually it's similar to main() in C. When a script is invoked by the runtime, this is the function that will be called. In this case the parameters are the incoming and outgoing pixels from our allocation. A generic user pointer is also provided, along with the address within the allocation that this invocation is processing. This example ignores these parameters.

The first line of root() unpacks the pixel from RGBA_8888 into a vector of four floats. The second line uses a built-in math function to compute the dot product of the monochrome constants with the incoming pixel data to get our grey level. Note that while dot returns a single float, it can be assigned to a float3, which simply copies the value to each of the x, y, and z components of the float3. Finally, we use another built-in to repack the floats back to a 32-bit pixel. This is also an example of an overloaded function, as there are separate versions of rsPackColorTo8888 which take RGB (float3) or RGBA (float4) data. If A is not provided, the overloaded functions assume a value of 1.0f.

The filter() function is called from managed code to do the conversion. It simply does a compute launch on each element of the allocation. The first parameter is the script to be launched - the root function of this script will be invoked for each element in the allocation. The second and third parameters are the input and output data allocations. The last parameter is the pointer for the user data if we desired to pass additional information to the root function.

The forEach will launch across multiple threads if the device has multiple processors. In the future forEach can provide a transition point where control may pass from one processor to another. In this example it is reasonable to expect that in the future filter() would get executed on the CPU and root() would occur on a GPU or DSP.

I hope this gives a glimpse into the design behind Renderscript and a simple example of how it can be used.

Click-to-call emergency information

(Cross-posted from the Google.org blog)

In November 2010, we began displaying relevant emergency phone numbers at the top of the results page for searches around poison control, suicide and other common emergencies in 14 countries. Today, we are making it even easier for you to quickly reach the help you may need by adding click-to-call capabilities for all of these emergency information search results.

We piggybacked on the way our mobile ads team enabled click-to-call phone numbers in local ads on mobile devices. That capability makes it easier for customers to reach a business when they search on an Internet-enabled mobile device, and it seemed ideal for the emergency information feature.

Previously, mobile users in one of these countries who conducted searches around poison control, suicide and common emergency numbers received a result showing the relevant emergency phone number.

People on mobile will now get the same result, but the phone number will be a link they can tap to dial instantly.
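
For the curious, tapping a phone-number link on an Android device is normally handed off to the platform dialer. The snippet below is only an illustrative sketch of that hand-off (the number is a placeholder), not the implementation behind the search results page:

// Sketch: from inside an Activity, open the dialer pre-filled with a number,
// much as tapping a tel: link would. Assumes android.content.Intent and
// android.net.Uri are imported. ACTION_DIAL needs no special permission.
Intent dial = new Intent(Intent.ACTION_DIAL, Uri.parse("tel:112"));
startActivity(dial);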


Now, the poison control result in Spain is click-to-call on a mobile phone

We hope this addition is a small step that helps connect people with crucial information that they need immediately.

Instant Previews now available on mobile

(Cross-posted from the Official Google Blog)

Instant Previews provides a fast and interactive way to evaluate search results. Starting today, Google Instant Previews is available on mobile for Android (2.2+) and iOS (4.0+) devices across 38 languages. Similar to the desktop version of Instant Previews, you can visually compare search results from webpage snapshots, making it easier to choose the right result faster, especially when you have an idea of the content you’d like to see.

For example, if you’re looking for a webpage that has both photos and descriptions, you can use Instant Previews to quickly identify these pages by navigating across the visual search results with a few swipes of your finger. Or perhaps you’re looking for an article, a step-by-step instructions list, or a product comparison chart—with Instant Previews, you can easily spot pages with the right content without having to navigate back and forth between websites and search results. And when the mobile version of a website is available, we’ll show you a preview of the mobile page.





To use Instant Previews on your mobile device, do a search on www.google.com and tap on the magnifying glass next to any search result. A side-by-side comparison view of the webpage previews for the first page of search results will appear. When you find a result you like, tap on the preview to go straight to the website. It’s as easy as finding a recipe for poaching an egg:




You can learn more about Instant Previews for mobile in our Help Center. We hope that you enjoy finding the right result faster with Instant Previews!

You’ve got better things to do than wait in traffic

Ever been stuck in traffic, only to find out you’d have been better off going a bit out of your way to take a less congested route? If you’re like me, you probably hear the traffic report telling you what you already know: traffic is bad on the road you’re currently on, and you should have taken another. It doesn’t need to be this way, and we want to help. So we’re happy to announce that Google Maps Navigation (Beta) will now automatically route you around traffic. With more than 35 million miles driven by Navigation users every day, this should add up to quite a bit of time saved!

On a recent trip to New York, I was running late to meet some friends at the Queens Museum of Art. I had no idea that there was a traffic jam along the route I would normally have taken. Thankfully, Navigation routed me around traffic. I didn’t even have to know that there was a traffic jam on I-495, and I got to enjoy a much faster trip on I-278 instead.

Navigation now uses real-time traffic conditions to automatically route you around traffic.

You don’t have to do anything to be routed around traffic; just start Navigation like you normally would, either from the Navigation app or from within Google Maps. Before today, Navigation would choose whichever route was fastest, without taking current traffic conditions into account. It would also generate additional alternate directions, such as the shortest route or one that uses highways instead of side roads. Starting today, our routing algorithms will also apply our knowledge of current and historical traffic to select the fastest route from those alternates. That means that Navigation will automatically guide you along the best route given the current traffic conditions.
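
As a rough mental model only (this toy sketch is purely illustrative, not Google’s routing code), selecting among those alternates amounts to picking the one with the smallest traffic-adjusted travel time:

// Toy illustration only: not Google Maps Navigation's actual algorithm or API.
import java.util.Arrays;
import java.util.List;

final class TrafficAwareChooser {
    // A hypothetical alternate route with a traffic-adjusted travel-time estimate.
    static final class Route {
        final String name;
        final double minutesWithTraffic;
        Route(String name, double minutesWithTraffic) {
            this.name = name;
            this.minutesWithTraffic = minutesWithTraffic;
        }
    }

    // Pick the alternate with the smallest traffic-adjusted ETA.
    static Route pickFastest(List<Route> alternates) {
        Route best = alternates.get(0);
        for (Route r : alternates) {
            if (r.minutesWithTraffic < best.minutesWithTraffic) {
                best = r;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Example inputs only: a jammed I-495 versus a clearer I-278, as in the trip above.
        Route best = pickFastest(Arrays.asList(
                new Route("I-495", 55.0), new Route("I-278", 40.0)));
        System.out.println("Take " + best.name);   // prints "Take I-278"
    }
}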

Not only can you save time and fuel, but by avoiding traffic jams you also make traffic better for everyone else. Keep in mind that we can’t guarantee Navigation will always find a faster way, but it will always try to get you where you’re going as fast as possible.

You can begin routing around traffic with Google Maps Navigation for Android in North America and Europe where both Navigation and real-time traffic data are available.

Enjoy your newly found free time!

An Update on Android Market Security

On Tuesday evening, the Android team was made aware of a number of malicious applications published to Android Market. Within minutes of becoming aware, we identified and removed the malicious applications. The applications took advantage of known vulnerabilities which don’t affect Android versions 2.2.2 or higher. For affected devices, we believe that the only information the attacker(s) were able to gather was device-specific (IMEI/IMSI, unique codes which are used to identify mobile devices, and the version of Android running on your device). But given the nature of the exploits, the attacker(s) could access other data, which is why we’ve taken a number of steps to protect those who downloaded a malicious application:

  1. We removed the malicious applications from Android Market, suspended the associated developer accounts, and contacted law enforcement about the attack.
  2. We are remotely removing the malicious applications from affected devices. This remote application removal feature is one of many security controls the Android team can use to help protect users from malicious applications.
  3. We are pushing an Android Market security update to all affected devices that undoes the exploits to prevent the attacker(s) from accessing any more information from affected devices. If your device has been affected, you will receive an email from android-market-support@google.com over the next 72 hours. You will also receive a notification on your device that “Android Market Security Tool March 2011” has been installed. You may also receive notification(s) on your device that an application has been removed. You are not required to take any action from there; the update will automatically undo the exploit. Within 24 hours of the exploit being undone, you will receive a second email.
  4. We are adding a number of measures to help prevent additional malicious applications using similar exploits from being distributed through Android Market and are working with our partners to provide the fix for the underlying security issues.

For more details, please visit the Android Market Help Center. We always encourage you to check the list of permissions when installing an application from Android Market. Security is a priority for the Android team, and we’re committed to building new safeguards to help prevent these kinds of attacks from happening in the future.


Rich Cannings, Android Security Lead

Fragments For All

[This post is by Xavier Ducrohet, Android SDK Tech Lead. — Tim Bray]

A few weeks ago, Dianne Hackborn wrote about the new Fragments API, a mechanism that makes it easier for applications to scale across a variety of screen sizes.

However, as Dianne noted, this new API, which is part of Honeycomb, does not help developers whose applications target earlier versions of Android.



Today we’ve released a static library that exposes the same Fragments API (as well as the new LoaderManager and a few other classes) so that applications compatible with Android 1.6 or later can use fragments to create tablet-compatible user interfaces.

This library is available through the SDK Updater; it’s called “Android Compatibility package”.
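
As a quick orientation, here is a minimal sketch of using the library, assuming its classes live in the android.support.v4.app package; the activity, fragment and text below are placeholders rather than code from the library’s samples:

import android.os.Bundle;
import android.support.v4.app.Fragment;
import android.support.v4.app.FragmentActivity;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.TextView;

// Extend FragmentActivity (not Activity) to get fragment support before Honeycomb.
public class MainActivity extends FragmentActivity {

    // Fragments should be public static so the framework can re-create them.
    public static class HelloFragment extends Fragment {
        @Override
        public View onCreateView(LayoutInflater inflater, ViewGroup container,
                Bundle savedInstanceState) {
            TextView text = new TextView(getActivity());
            text.setText("Hello from a Fragment running on Android 1.6+");
            return text;
        }
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        if (savedInstanceState == null) {
            // getSupportFragmentManager() comes from the compatibility FragmentActivity.
            getSupportFragmentManager().beginTransaction()
                    .add(android.R.id.content, new HelloFragment())
                    .commit();
        }
    }
}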

Tweet your Hotpot ratings in Google Maps for Android

(Cross-posted on the Hotpot Blog and the LatLong Blog.)


Whether it’s Google Places with Hotpot or Google Latitude, we’re working on helping you connect the people you care about with places you love. Now, when you’re rating your dinner spot using Google Maps for Android, you can share your review with even more people by posting it to Twitter.


Post your ratings and reviews to Twitter
When you rate and review places like restaurants or cafes from Google Places, you can share valuable recommendations with your Hotpot friends and across Google’s products – in search results, on google.com/hotpot, and on Place pages. But we wanted you to be able to share your recommendations even more broadly. So today, you can start sharing your ratings and reviews with your followers on Twitter directly from your Android-powered device.





When rating on the go using our rating widget, just choose to Post review to Twitter and connect your Twitter account. You’ll get a preview of your tweet and will be able to post your ratings and reviews moving forward.






Post your ratings and reviews to your Twitter followers.


Check-ins: ping friends and search for places

Starting last month, you could share information about the place you were at, in addition to your location, by checking in at places using Google Latitude. Starting today, if you see nearby Latitude friends on the map and want to ask them where they are, you can quickly “ping” them instead of having to text or call. They’ll receive an Android notification from you asking them to check in at a place. And when they respond by checking in, you’ll get a notification right back so you know which place to go to meet up with them.





From a friend’s Latitude profile, ping them (left) and they’ll receive a notification (right).


You’ll also be able to more easily check yourself in at the right place. Sometimes there are a lot of places around you, and the right one is missing from the suggested list of places to check in at. You can now quickly search for the right place using the Search more places button.





Search for the right place to check in if it’s not among the suggested places.


To start posting Hotpot ratings to Twitter and pinging Latitude friends, just download Google Maps 5.2 from Android Market (on Android OS 1.6+ devices), everywhere it’s already available. Please keep in mind that both Latitude friends need version 5.2 in order to use the new “ping” feature. Learn more in the Help Center.