Android App Development: Creating a Google Glass Application

In this post, we’ll go through how to create an application for sharing photos between Google Glass and Android devices (such as cell phones and tablets). For all of you Android app developers, all the code can be found on the GitHub repository, and it’s open for access and improvements.


One of the main features that Google Glass lacks is the option to share photos between paired devices through Bluetooth. With that in mind, the original intent of the application was to add the ability to share a picture taken with the native camera over Bluetooth. Nevertheless, since the GDK is at a very early stage, it doesn’t allow us to implement this behavior directly. So the original idea had to be expanded, that is: show a camera preview, take a photo, and share it through Bluetooth.

Given this, the application is composed of:

  1. A client application on Google Glass.
  2. A server application service on an Android device. This application is in charge of listening for and accepting Bluetooth connection requests from Google Glass.

Client application

This application will be in charge of the logic related to camera handling and the initial communication with the server device. Given this, the first screen a user must see is a camera preview. This step is pretty much straightforward; a wrapper class extending SurfaceView will handle the entire task:

public class CameraView extends SurfaceView implements SurfaceHolder.Callback {
            private SurfaceHolder surfaceHolder = null;
            private Camera camera = null;

            public CameraView(Context context) {
                super(context);
                surfaceHolder = this.getHolder();
                surfaceHolder.addCallback(this);
            }

            public void surfaceCreated(SurfaceHolder holder) {
                try {
                    camera =;

                    // Set the hotfix parameters for Google Glass
                    setCameraParameters(camera);

                    // Show the camera display
                    camera.setPreviewDisplay(holder);
                } catch (Exception e) {
                    releaseCamera();
                }
            }

            public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
                // Start the preview once the surface is ready
                if (camera != null) {
                    camera.startPreview();
                }
            }

            public void surfaceDestroyed(SurfaceHolder holder) {
                // Do not hold the camera once the view is gone
                releaseCamera();
            }

            public void setCameraParameters(Camera camera) {
                if (camera != null) {
                    Parameters parameters = camera.getParameters();
                    parameters.setPreviewFpsRange(30000, 30000);
                    camera.setParameters(parameters);
                }
            }

            public void releaseCamera() {
                if (camera != null) {
                    camera.stopPreview();
                    camera.release();
                    camera = null;
                }
            }
}

This can be easily implemented within an activity using the “setContentView” method as follows:

public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);

    // Initiate CameraView
    cameraView = new CameraView(this);
    setContentView(cameraView);

    // Turn on gestures
    mGestureDetector = createGestureDetector(this);
}

This way a simple camera preview has been implemented in our activity. In order to take a picture, a single tap must be performed on the Glass gesture pad, which can be implemented using a GestureDetector:

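A minimal sketch of such a detector, using the GDK classes `` and `Gesture`; `takePicture()` is a hypothetical helper that fires the camera intent:

```java
private GestureDetector createGestureDetector(Context context) {
    GestureDetector gestureDetector = new GestureDetector(context);
    gestureDetector.setBaseListener(new GestureDetector.BaseListener() {
        @Override
        public boolean onGesture(Gesture gesture) {
            if (gesture == Gesture.TAP) {
                // A single tap on the gesture pad triggers the camera
                takePicture();  // hypothetical helper that fires the camera intent
                return true;
            }
            return false;
        }
    });
    return gestureDetector;
}
```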

Then, overriding the generic motion event on the activity, we just return the gesture detector’s result, which will trigger the picture-taking action:

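A sketch of that override, assuming `mGestureDetector` is the detector created in the activity:

```java
@Override
public boolean onGenericMotionEvent(MotionEvent event) {
    // Forward touchpad events to the GDK gesture detector
    if (mGestureDetector != null) {
        return mGestureDetector.onMotionEvent(event);
    }
    return false;
}
```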

As noted above, in order to get a picture from the Glass camera, an intent must be fired, which returns the URI of the picture taken after a confirmation. Nevertheless, even if the URI is returned, this doesn’t mean that the photo is available (this is a known issue of the GDK). So a workaround is to implement a file watcher, which will tell us when the picture has been written to the source folder:

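A sketch of this workaround, assuming the GDK camera intent hands back the file path through `CameraManager.EXTRA_PICTURE_FILE_PATH`; `onPictureReady()` is a hypothetical callback for when the file is usable:

```java
private static final int TAKE_PICTURE_REQUEST = 1;

private void takePicture() {
    // Fire the native camera through an intent
    startActivityForResult(new Intent(MediaStore.ACTION_IMAGE_CAPTURE), TAKE_PICTURE_REQUEST);

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == TAKE_PICTURE_REQUEST && resultCode == RESULT_OK) {
        String path = data.getStringExtra(CameraManager.EXTRA_PICTURE_FILE_PATH);
        final File pictureFile = new File(path);
        // The path is returned before the file is fully written (known GDK issue),
        // so watch the folder until the picture actually lands on disk
        FileObserver observer = new FileObserver(pictureFile.getParentFile().getAbsolutePath(),
                FileObserver.CLOSE_WRITE | FileObserver.MOVED_TO) {
            @Override
            public void onEvent(int event, String file) {
                if (pictureFile.getName().equals(file)) {
                    stopWatching();
                    onPictureReady(pictureFile);  // hypothetical callback
                }
            }
        };
        observer.startWatching();
    }
    super.onActivityResult(requestCode, resultCode, data);
}
```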

Just as a warning, sometimes the picture takes a while to become available (between 5 and 30 seconds), so don’t forget to implement a simple progress indicator to tell the user that a process is running.
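One simple, hypothetical option is a stock indeterminate spinner shown while the file watcher waits (Glass’s own UI widgets would be another choice):

```java
// Indeterminate spinner while the FileObserver waits for the picture
ProgressDialog waitDialog =, "", "Preparing picture...", true);

// ...later, once the FileObserver reports the file has been written:
waitDialog.dismiss();
```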

After the picture has been written, we are ready to send the picture file through Bluetooth. For this, a new activity will show all the discovered Bluetooth devices in a scroll view, which can be navigated with the Glass gesture pad. A single tap selects a device.

The first task is implemented using a BroadcastReceiver filtered on “BluetoothDevice.ACTION_FOUND”, which constantly updates our device list. Note that discovery only finds devices that are currently discoverable, so previously paired devices will not show up this way, even with Bluetooth on (this is a security measure). They can be added to the list separately with the following code:

Set<BluetoothDevice> pairedDevices = myBt.getBondedDevices();
    if (pairedDevices != null && pairedDevices.size() > 0) {
        for (BluetoothDevice device : pairedDevices) {
            deviceName = (device.getName() != null) ? device.getName() : "Unnamed device";
            bDevice = new BluetoothDeviceModel(deviceName, device.getAddress());
            bluetoothDevicesAdapter.add(bDevice);
        }
    }

Where “myBt” is the device’s default Bluetooth adapter.
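The discovery side described above could be sketched like this; `bluetoothDevicesAdapter.add(...)` is assumed to be a method on the project’s custom adapter:

```java
// Receiver that adds each newly discovered device to the list
private final BroadcastReceiver discoveryReceiver = new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        if (BluetoothDevice.ACTION_FOUND.equals(intent.getAction())) {
            BluetoothDevice device = intent.getParcelableExtra(BluetoothDevice.EXTRA_DEVICE);
            String name = (device.getName() != null) ? device.getName() : "Unnamed device";
            // Hypothetical adapter method that appends a device to the scroll view
            bluetoothDevicesAdapter.add(new BluetoothDeviceModel(name, device.getAddress()));
        }
    }
};

// In onCreate: register the receiver and start discovery
registerReceiver(discoveryReceiver, new IntentFilter(BluetoothDevice.ACTION_FOUND));
myBt.startDiscovery();
```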

Instead of a ListView, the GDK introduces a navigation class called CardScrollView, which uses an adapter of “Cards” where all the discovered devices can be shown. This view uses the same logic as a simple ListView, so we can skip the details. The next step is to select the desired device to send the taken picture to. Luckily for us, the CardScrollView class supports the same OnItemClickListener interface as a ListView:

public void onItemClick(AdapterView<?> parent, View view, int position, long id) {

    if (sendFileToDeviceTask != null) {
        Log.e(TAG, "Canceling old connection, and starting new one.");
        sendFileToDeviceTask.cancel(true);
    }

    BluetoothDeviceModel bDevice = (BluetoothDeviceModel) parent.getItemAtPosition(position);
    Log.i(TAG, "Connecting to device: " + bDevice.getDeviceMACAddress());

    if (selectedFile == null)
        selectedFile = sendTestPicture();

    // Remember the selected device for later sessions
    pref.edit().putString(SAVED_BT_DEVICE, bDevice.getDeviceMACAddress()).commit();
}

And the implementation is pretty much straightforward:

devicesScrollView = (CardScrollView) findViewById(; // layout id is project-specific
bluetoothDevicesAdapter = new BluetoothDevicesAdapter(this);
devicesScrollView.setAdapter(bluetoothDevicesAdapter);
devicesScrollView.setOnItemClickListener(this);
devicesScrollView.activate();

After the user taps, the Bluetooth client service is called, starting the communication with the server application. The function of this client is to keep a list of “ready to share” pictures and send them one by one to the server device. Also, in case of a connection error, the service will try to resend the current photo. Given this, the algorithm for sending a picture through Bluetooth works as follows:

  1. Open a socket connection with the server application.
  2. Send the number of bytes to be sent.
  3. Send the taken picture as bytes.
  4. Close the connection.

The following lines of code perform the first step:

bluetoothSocket = device.createInsecureRfcommSocketToServiceRecord(
        BluetoothParametersHolder.uuids[0]);
bluetoothSocket.connect();

For the second and third steps, we must send two kinds of information over the same output stream: the picture size in bytes, and the actual picture as bytes. This is because the server application must know how many bytes to expect. Both are performed by the following function:

private void sendFile(Uri uri, long size) throws IOException {
    BufferedInputStream bis = new BufferedInputStream(new FileInputStream(uri.getPath()));
    try {
        mmInStream = bluetoothSocket.getInputStream();
        mmOutStream = bluetoothSocket.getOutputStream();

        int bufferSize = 1024;
        ByteBuffer bb = ByteBuffer.allocate(bufferSize);
        byte[] buffer = new byte[bufferSize];

        // We need to know how many bytes were read to write them to the output stream
        int len = 0;
        // Send the header info: the picture size in bytes
        bb.putLong(size);
        mmOutStream.write(bb.array(), 0, bufferSize);
        // Send the actual picture
        while ((len = != -1) {
            mmOutStream.write(buffer, 0, len);
        }
        mmOutStream.flush();
    } finally {
        bis.close();
    }
}
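The header framing used here (a fixed 1024-byte buffer whose first eight bytes hold the file size as a long) can be exercised in plain Java, independent of any Bluetooth socket; `HeaderDemo` and its methods are illustrative names:

```java
import java.nio.ByteBuffer;

public class HeaderDemo {
    static final int BUFFER_SIZE = 1024;

    // Client side: pack the file size into the fixed-size header buffer
    static byte[] buildHeader(long size) {
        ByteBuffer bb = ByteBuffer.allocate(BUFFER_SIZE);
        bb.putLong(size);
        return bb.array();
    }

    // Server side: recover the file size from the first eight bytes of the header
    static long parseHeader(byte[] bytes) {
        return ByteBuffer.wrap(bytes).getLong();
    }

    public static void main(String[] args) {
        byte[] header = buildHeader(123456L);
        System.out.println(parseHeader(header)); // prints 123456
    }
}
```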

Last but not least, a single line closes the connection:

bluetoothSocket.close();

This process is repeated until the picture (or pictures) have been sent.

That’s it, we have successfully implemented a simple application that allows us to share pictures taken from a camera. But the application isn’t complete at this point: even if we manage to install the application on the Google Glass device, the user will not have access to it. This is because the Google Glass interface doesn’t have an application explorer, given the limitations on user interaction. In order to let the user access our application, we can use the device’s speech recognition system. To do this, we just need to register the application for that service in the application manifest file:

<service
    android:name=".GoogleVoiceTriggerService"
    android:enabled="true" >
    <intent-filter>
        <action android:name="" />
    </intent-filter>
    <meta-data
        android:resource="@xml/activity_voice_trigger" />

Where activity_voice_trigger is just an XML resource that points to our application name string, in this case:

<trigger keyword="@string/app_voice_trigger_name" />

The service class is made up of the following code:

public class GoogleVoiceTriggerService extends Service {
    private static final String LIVE_CARD_TAG = "itexico_picture";

    private TimelineManager mTimelineManager;
    private LiveCard mLiveCard;

    @Override
    public void onCreate() {
        mTimelineManager = TimelineManager.from(this);
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        mLiveCard = mTimelineManager.createLiveCard(LIVE_CARD_TAG);
        // Open the main activity when the live card is tapped
        Intent i = new Intent(this, MainActivity.class);
        mLiveCard.setAction(PendingIntent.getActivity(this, 0, i, 0));
        mLiveCard.publish(LiveCard.PublishMode.REVEAL);
        return START_STICKY;
    }
}

Now the application will be shown in the “Ok, Glass” menu, which can be accessed by voice or by scrolling.

At this point we have a working implementation of the client application that will run on the Glass device; let’s continue with the server application for the Android device.

Server application

The server application is only in charge of initializing the services that will listen for all requests from the client (Google Glass) application. Once the client application starts to send the picture, a notification will appear in the device’s notification bar, showing the current progress of the transfer. When the transfer is finished, the service returns to its listening state and the process can be repeated. In summary, the following parts are needed for our application:

  1. Main activity to start our service.
  2. Bluetooth connection listening service.
  3. Notification for transfer progress.

The first part is quite simple; we just need to create an empty activity in charge of starting the Bluetooth listener service. This can be accomplished by the following function:

private void startBluetoothService() {
    Intent i = new Intent(context, BluetoothService.class);
    context.startService(i);
}

The second part is performed by two subroutines. The first one is a simple listener task that waits on a preregistered socket connection (in this case a fixed UUID), which is performed by the following lines of code:

mmServerSocket = bluetoothAdapter.listenUsingInsecureRfcommWithServiceRecord(
        BluetoothParametersHolder.NAME, BluetoothParametersHolder.uuids[0]);
while (true) {
    try {
        socket = mmServerSocket.accept();
        if (socket != null) {
            // Hand the connected socket to the receiving subroutine
        }
    } catch (Exception e) {
        break;
    }
}
The first statement registers the socket connection to listen on, while the loop performs the continuous listening. The second subroutine is in charge of receiving the picture from the Glass device. For this task, first we must read the “header”, which is the picture size in bytes, and after that, receive and save the actual picture bytes. The header parsing is performed by the following code:

private void parseHeader(byte[] bytes) {
    ByteBuffer bb = ByteBuffer.wrap(bytes);
    totalBytesToReceieve = (int) bb.getLong();
    Log.i(TAG, "Bytes to receive: " + totalBytesToReceieve);
}

This way we can make the necessary calculations to show the progress notification on the device. Now, for the actual saving of the picture, the following code will be enough:

private void savePicture() {
    File dcimDir = new File(Environment.getExternalStorageDirectory(), "DCIM");
    String timeStamp = new SimpleDateFormat("yyyyMMdd_HHmmss",
        Locale.US).format(new Date());
    int read;
    boolean headerInfo = true;
    byte[] bytes = new byte[bufferSize];
    OutputStream outputStream = null;
    try {
        outputStream = new FileOutputStream(new
                File(dcimDir.getAbsolutePath(), "glass" + timeStamp + ".jpg"));
        while ((read = != -1) {
            if (headerInfo) {
                // The first buffer contains the header: the picture size
                parseHeader(bytes);
                headerInfo = false;
            } else {
                outputStream.write(bytes, 0, read);
                this.onProgressUpdate(bytesProgress += read);
            }
        }
    } catch (IOException e) {
        Log.e(TAG, "Error while saving the picture", e);
    } finally {
        if (outputStream != null) {
            try {
                outputStream.close();
            } catch (IOException e) {
                // Nothing left to do if closing fails
            }
        }
    }
}
Where the saving action is performed by the while loop, which will continue until no more bytes are available.

For the third and last part of our algorithm, we need to show a notification on the device. To perform this we use the onProgressUpdate method of the AsyncTask class; there, an implemented interface calls the service’s notification method. The method looks like this:

private void sendProgressNotification(int progress, boolean isComplete) {
    if (notification == null) {
        Intent intent = new Intent(this, MainActivity.class);
        PendingIntent pIntent = PendingIntent.getActivity(this, 0, intent, 0);
        notification = new NotificationCompat.Builder(this);
        notification.setContentTitle("Picture Download")
                .setContentText("Download in progress")
                .setContentIntent(pIntent);
    }
    if (!isComplete) {
        notification.setProgress(100, progress, false);
    } else {
        // When the loop is finished, update the notification
        notification.setContentText("Download complete")
                // Remove the progress bar
                .setProgress(0, 0, false);
    }
    // Push the update through the system NotificationManager
    notificationManager.notify(NOTIFICATION_ID,;
}



As depicted, the method receives the current progress and a Boolean indicating whether the transfer has completed.

Final remarks

After working with Google Glass for approximately a week, I found out how new this technology is and how open it is to improvement. The first thing I noticed is how easily it becomes unstable, mainly because of the overheating that arises from using the camera preview and Bluetooth. Debugging is also somewhat cumbersome and neck-unfriendly (more so if you only have a short USB cable), and the lack of a good emulator doesn’t help either.

Nevertheless, this new hardware opens a world of possibilities for application development: from integration between Google Glass and Android devices (basically smartphones and smartwatches) and augmented reality, to collaboration with computers, TVs, or other Google Glass devices. But with poor SDK documentation, the absence of some common and simple Android API access (such as the Share intent), and the lack of a good simulator (given the price of the hardware), the arrival of interesting applications could take some time.

About the Author

Victor Cervantes is an Android developer with 1+ years of experience with the Android mobile platform. He has a Master’s Degree in Computer Science specializing in optimization problem solving. He is an Appcelerator Titanium Certified Developer (TCD) specialized in Android development.
