Attending the GStreamer Conference 2017

This weekend I’ll be in Node5 (Prague) presenting our Media Source Extensions platform implementation work in WebKit using GStreamer.

The Media Source Extensions HTML5 specification allows JavaScript to generate media streams for playback and gives the web page more control over complex use cases such as adaptive streaming.

My plan for the talk is to start with a brief introduction to the motivation and basic usage of MSE. Next I’ll give a design overview of the WebKit implementation of the spec. Then we’ll go through the iterative evolution of the GStreamer platform-specific parts, their implementation quirks, and the challenges faced during development. The talk will close with a demo, some notes on future work and a final round of questions.

Our recent MSE work has been on desktop WebKitGTK+ (the WebKit flavour powering Epiphany, aka GNOME Web), but we also have MSE working on WPE, optimized for a Raspberry Pi 2. We will be showing it at the Igalia booth, in case you want to see it working live.

I’ll also be attending the GStreamer Hackfest in the days before. There I plan to work on WebM support in MSE, focusing on any issues in the Matroska demuxer or the VP9/Opus/Vorbis decoders that break our use cases.

See you there!

UPDATE 2017-10-22:

The talk slides are available at https://eocanha.org/talks/gstconf2017/gstconf-2017-mse.pdf and the video is available at https://gstconf.ubicast.tv/videos/media-source-extension-on-webkit (the rest of the talks are on the same site).

Media Source Extensions upstreaming, from WPE to WebKitGTK+

A lot of good things have happened to the Media Source Extensions support since my last post, almost a year ago.

The most important piece of news is that the code upstreaming has kept going forward at a slow but steady pace. The amount of code Igalia had to port was pretty big. Calvaris (my favourite reviewer) and I considered that the regular review tools in WebKit bugzilla were not going to be enough for a good exhaustive review. Instead, we did a pre-review in GitHub using a pull request on my own repository. It was an interesting experience, because the change set was so large that it had to be (artificially) divided into smaller commits just to avoid hitting GitHub’s diff display limits.

394 GitHub comments later, the patches were mature enough to be submitted to bugzilla as child bugs of Bug 157314 – [GStreamer][MSE] Complete backend rework. After some more comments in bugzilla, they were finally committed during the Web Engines Hackfest 2016.

Some unforeseen regressions in the layout tests appeared, but after a couple more commits all the mediasource WebKit tests were passing. There are also some other tests imported from W3C, but I kept them skipped because WebM support is needed for many of them. I’ll get back to that set of tests in due time.

Igalia is proud of having brought up-to-date MSE support to WebKitGTK+. Eventually, this will improve the video experience for the many users of Epiphany and other web browsers based on that library, enabling, for instance, the use of YouTube TV at 1080p@30fps on desktop Linux.

Our future roadmap includes bug fixing and WebM/VP9+Opus support. That support is important for users in countries that enforce patents on H.264; for that reason, the current implementation can’t be included in distros such as Fedora.

As mentioned before, part of this upstreaming work happened during Web Engines Hackfest 2016. I’d like to thank our sponsors for having made this hackfest possible, as well as Metrological for giving upstreaming the importance it deserves.

Thank you for reading.

 

Improving Media Source Extensions on WebKit ports based on GStreamer

During 2014 I started to become interested in how GStreamer was used in WebKit to play media content and how it related to Media Source Extensions (MSE). Throughout 2015, my company Igalia strengthened its cooperation with Metrological to enhance the multimedia support in their customized version of WebKitForWayland, the web platform they use for their set-top box products. This was an opportunity to do really interesting things in the multimedia field on a really nice hardware platform: the Raspberry Pi.

What are Media Source Extensions?

Normal URL playback in the <video> tag works by configuring the platform player (GStreamer in our case) with a source HTTP URL, so it behaves much like any other external player, downloading the content and showing it in a window. Special cases such as Dynamic Adaptive Streaming over HTTP (DASH) are handled automatically by the player, which then becomes more complex. At the same time, the JavaScript code in the webpage has no way to know what’s happening with the quality changes in the stream.
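
For reference, plain URL playback at the GStreamer level boils down to something like the following standalone sketch (an illustrative example with a made-up URL, not WebKit code): playbin takes care of downloading, demuxing, decoding and rendering on its own.

 #include <gst/gst.h>

 int main(int argc, char** argv)
 {
     gst_init(&argc, &argv);

     // playbin builds and manages the whole pipeline internally.
     GstElement* playbin = gst_element_factory_make("playbin", nullptr);
     g_object_set(playbin, "uri", "https://example.com/video.mp4", nullptr);
     gst_element_set_state(playbin, GST_STATE_PLAYING);

     // Block until an error or end-of-stream is posted on the bus.
     GstBus* bus = gst_element_get_bus(playbin);
     GstMessage* msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
         (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
     if (msg)
         gst_message_unref(msg);

     gst_element_set_state(playbin, GST_STATE_NULL);
     gst_object_unref(bus);
     gst_object_unref(playbin);
     return 0;
 }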

The MSE specification lets the authors move that responsibility to the JavaScript side in those kinds of scenarios. A Blob object (Blob URL) can be configured to get its data from a MediaSource object. The MediaSource object can instantiate SourceBuffer objects. Video and Audio elements in the webpage can be configured with those Blob URLs. With this setup, JavaScript can manually feed binary data to the player by appending it to the SourceBuffer objects. The data is buffered and the playback time ranges generated by the data are accessible to JavaScript. The web page (and not the player) now has control over the data being buffered, its quality, codec and provenance. It’s even possible to synthesize the media data programmatically if needed, opening the door to media editors and media effects coded in JavaScript.

[Figure: mse1]

MSE is being adopted by the main content broadcasters on the Internet. It’s required by YouTube for its dedicated interface for TV-like devices, and there’s even an MSE conformance test suite that hardware manufacturers must pass in order to get certified for that platform.

MSE architecture in WebKit

WebKit is a multiplatform framework with an end user API layer (WebKit2), an internal layer common to all platforms (WebCore) and particular implementations for each platform (GObject + GStreamer, in our case). Google and Apple have done great work bringing MSE to WebKit. They have led the effort to implement the common WebCore abstractions needed to support MSE, such as MediaSource, SourceBuffer, MediaPlayer and the integration with HTMLMediaElement (the video tag). They have also provided generic platform interfaces (MediaPlayerPrivateInterface, MediaSourcePrivate, SourceBufferPrivate), a working platform implementation for Mac OS X and a mock platform for testing.

[Figure: mse2]

The main contributions to the platform implementation for ports using GStreamer for media playback were done by Stephane Jadaud and Sebastian Dröge on bugs #99065 (initial implementation with hardcoded SourceBuffers for audio and video), #139441 (multiple SourceBuffers) and #140078 (support for tracks, more containers and encoding formats). This last patch still hasn’t been merged into trunk, but I used it as the starting point for the work to be done.

GStreamer, unlike other media frameworks, is strongly based on the concept of a pipeline: the data traverses a series of linked elements (sources, demuxers, decoders, sinks) which process it in stages. At any given time, different pieces of data are in the pipeline at different processing stages. In the case of MSE, a special WebKitMediaSrc GStreamer element is used as the data source in the pipeline and also serves as the interface with the upper MSE layer, acting as a client of MediaSource and SourceBuffer. WebKitMediaSrc is spawned by GstPlayBin (a container which manages everything inside it automatically) when an MSE SourceBuffer is added to the MediaSource. The MediaSource is linked with the MediaPlayer, which has MediaPlayerPrivateGStreamer as its private platform implementation.

In the design we were using at that time, WebKitMediaSrc was responsible for demuxing the data appended to each SourceBuffer into several streams (I’ve never seen more than one stream per SourceBuffer, though) and for reporting the statistics and the samples themselves to the upper layer according to the MSE spec. To do that, WebKitMediaSrc encapsulated an appsrc, a demuxer and a parser per source. The remaining pipeline elements after WebKitMediaSrc were in charge of decoding and playback.
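
As a rough illustration of that design, this is how one of those per-SourceBuffer chains (appsrc, demuxer and parser) could be assembled by hand. This is a hedged sketch assuming MP4/H.264 content; the element choices, function names and the lack of error handling are mine, not the actual WebKitMediaSrc code:

 // Illustrative sketch (not the actual WebKitMediaSrc code): one per-SourceBuffer
 // source chain as described above (appsrc -> qtdemux -> parser) wrapped in a bin.
 // Assumes MP4/H.264 content and that gst_init() has already been called.
 #include <gst/gst.h>

 static void onDemuxerPadAdded(GstElement* demuxer, GstPad* srcPad, gpointer userData)
 {
     // qtdemux creates its source pads dynamically; link them to the parser.
     GstElement* parser = GST_ELEMENT(userData);
     GstPad* sinkPad = gst_element_get_static_pad(parser, "sink");
     if (!gst_pad_is_linked(sinkPad))
         gst_pad_link(srcPad, sinkPad);
     gst_object_unref(sinkPad);
 }

 static GstElement* createSourceBufferChain()
 {
     GstElement* bin = gst_bin_new(nullptr);
     GstElement* appsrc = gst_element_factory_make("appsrc", nullptr);
     GstElement* demuxer = gst_element_factory_make("qtdemux", nullptr);
     GstElement* parser = gst_element_factory_make("h264parse", nullptr);

     gst_bin_add_many(GST_BIN(bin), appsrc, demuxer, parser, nullptr);
     gst_element_link(appsrc, demuxer);
     g_signal_connect(demuxer, "pad-added", G_CALLBACK(onDemuxerPadAdded), parser);

     // Expose the parser output as the bin's source pad so the decoding part of
     // the pipeline can be linked to it later.
     GstPad* parserSrc = gst_element_get_static_pad(parser, "src");
     gst_element_add_pad(bin, gst_ghost_pad_new("src", parserSrc));
     gst_object_unref(parserSrc);
     return bin;
 }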

Processing appends with GStreamer

The MSE implementation in Chromium uses a chunk demuxer to parse (demux) the data appended to the SourceBuffers. It keeps the parsing state and provides a self-contained way to perform the demuxing. Reusing that Chromium code would have been the easiest solution. However, GStreamer is a powerful media framework and we strongly believe that the demuxing stage can be done using GStreamer as part of the pipeline.

Because of the way GStreamer works, it’s easy to know when an element outputs new data, but there’s no easy way to know when it has finished processing its input without discontinuing the flow with an End Of Stream (EOS) event and effectively resetting the element. One simple approach that works is to use timeouts. If the demuxer doesn’t produce any output after a given time, we consider that the append has produced all the MediaSamples it could and therefore has finished. Two different timeouts were used: one to detect when appends that produce no samples have finished (noDataToDecodeTimeout) and another to detect when no more samples are coming (lastSampleToDecodeTimeout). The former needs to be longer than the latter.
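
To give an idea of the timeout approach, here is a minimal sketch of the "no more samples are coming" detection, assuming GLib timeouts on the default main context. The names (AppendMonitor and so on) are made up for illustration and the real WebKit code is more involved:

 #include <gst/gst.h>

 typedef struct {
     guint timeoutId;  // Pending GLib timeout source id, 0 if none.
     guint timeoutMs;  // e.g. the lastSampleToDecodeTimeout, in milliseconds.
 } AppendMonitor;

 static gboolean appendConsideredFinished(gpointer userData)
 {
     AppendMonitor* monitor = (AppendMonitor*) userData;
     monitor->timeoutId = 0;
     g_print("No new samples for a while: the append is considered complete\n");
     // The real implementation would notify the upper MSE layer here.
     return G_SOURCE_REMOVE;
 }

 static GstPadProbeReturn onDemuxerOutput(GstPad* pad, GstPadProbeInfo* info, gpointer userData)
 {
     AppendMonitor* monitor = (AppendMonitor*) userData;
     // A new buffer arrived: push the deadline forward. (The real code also has
     // to serialize this against the main thread, which is omitted here.)
     if (monitor->timeoutId)
         g_source_remove(monitor->timeoutId);
     monitor->timeoutId = g_timeout_add(monitor->timeoutMs, appendConsideredFinished, monitor);
     return GST_PAD_PROBE_OK;
 }

 // Attached as a buffer probe on the demuxer source pad:
 // gst_pad_add_probe(srcPad, GST_PAD_PROBE_TYPE_BUFFER, onDemuxerOutput, &monitor, nullptr);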

Another technical challenge was to perform append processing when the pipeline isn’t playing. Until playback starts, the pipeline just prerolls (it is filled with the available data until the first frame can be rendered on the screen) and then pauses there until continuous playback can start. However, the MSE spec expects the appended data to be completely processed and delivered to the upper MSE layer first, and only then it’s up to JavaScript to decide whether playback on screen must start or not. The solution was to add intermediate queue elements with a very big capacity to force a preroll stage long enough for the probes on the demuxer source (output) pads to “see” all the samples pass beyond the demuxer. This is how the pipeline looked at that time (see also the full dump):

[Figure: mse3]
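
As a rough idea of the "very big capacity" queue trick mentioned above, this is how such an unlimited intermediate queue could be configured (illustrative values, not necessarily the ones used in WebKit):

 #include <gst/gst.h>

 static GstElement* createUnlimitedQueue()
 {
     GstElement* queue = gst_element_factory_make("queue", nullptr);
     // For the queue element, 0 means "no limit" in all three size properties.
     g_object_set(queue,
         "max-size-buffers", 0u,
         "max-size-bytes", 0u,
         "max-size-time", (guint64) 0,
         nullptr);
     return queue;
 }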

While focusing on making the YouTube 2015 tests pass on our Raspberry Pi 1, we realized that the generated buffered ranges had strange micro-holes (e.g. [0, 4.9998]; [5.0003, 10.0]) and that was confusing the tests. There were clearly differences of interpretation between ChunkDemuxer and qtdemux, but this is a minor problem which can be solved by adding some extra time ranges that fill the holes. All these changes got the append feature into good shape and we could start watching videos more or less reliably on YouTube TV for the first time.

Basic seek support

Let’s focus on a real use case for a moment. The JavaScript code can be appending video data in the [20, 25] range and audio data in the [30, 35] range (because the [20, 30] range was appended before), while we’re still playing the [0, 5] range. Our previous design let the media buffers leave the demuxer and enter the decoder without any control. This worked nicely for sequential playback, but was not compatible with non-linear playback (seeks). Feeding the decoder with video data for [0, 5] plus [20, 25] causes a big pause (while the timeline traverses [5, 20]) followed by a bunch of decoding errors (the decoder needs sequential data to work).

One possible improvement to support non-linear playback is to implement buffer stealing and buffer reinjecting at the demuxer output, so the buffers never go past that point without control. A probe steals the buffers, encapsulates them inside MediaSamples, pumps them to the upper MSE layer for storage and range reporting, and finally drops them at the GStreamer level. The buffers can later be reinjected by the enqueueSample() method when JavaScript decides to start playback at the target position. The flushAndEnqueueNonDisplayingSamples() method reinjects auxiliary samples from before the target position just to help keep the decoder sane and in the right internal state when the useful samples are inserted. You can see the dropping and reinjection points in the updated diagram:

[Figure: mse4]
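
The dropping point can be illustrated with a pad probe sketch like the one below. It is a simplification: the real code wraps each buffer into a WebCore MediaSample and reports the buffered ranges, which is only hinted at here:

 #include <gst/gst.h>

 static void handOverToMSELayer(GstBuffer* buffer)
 {
     // Placeholder for the real work: create a MediaSample, report the buffered
     // range and store the sample for later reinjection via enqueueSample().
     g_print("stealing buffer with PTS %" GST_TIME_FORMAT "\n",
         GST_TIME_ARGS(GST_BUFFER_PTS(buffer)));
     gst_buffer_unref(buffer);
 }

 static GstPadProbeReturn stealBufferProbe(GstPad* pad, GstPadProbeInfo* info, gpointer userData)
 {
     GstBuffer* buffer = GST_PAD_PROBE_INFO_BUFFER(info);
     handOverToMSELayer(gst_buffer_ref(buffer));
     // Returning DROP prevents the buffer from reaching the decoder on its own.
     return GST_PAD_PROBE_DROP;
 }

 // Installed on the demuxer source pad:
 // gst_pad_add_probe(demuxerSrcPad, GST_PAD_PROBE_TYPE_BUFFER, stealBufferProbe, nullptr, nullptr);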

The synchronization issues of managing several independent timelines at once must also be taken into account. Each of the ongoing append and playback operations happens in its own timeline, but the pipeline is designed to be configured for a common playback segment. The playback state (READY, PAUSED, PLAYING), the flushes needed by the seek operation and the prerolls also affect all the pipeline elements. This problem can be minimized by manipulating the segments by hand to accommodate the different timings and by relying on very large queues to sustain the processing in the demuxer, even when the pipeline is still paused. These changes can solve the issues and get the “47. Seek” test working, but YouTube TV is more demanding and requires a more structured design.

Divide and conquer

At this point we decided to simplify MediaPlayerPrivateGStreamer and refactor all the MSE logic into a new subclass called MediaPlayerPrivateGStreamerMSE. After that, the unified pipeline was split into N append pipelines (one per SourceBuffer) and one playback pipeline. This change solved the synchronization issues and split a complex problem into two simpler ones. The AppendPipeline class, visible only to the MSE private player, is in charge of managing all the append logic. There’s one instance for each of the N append pipelines.

Each append pipeline is created by hand. It contains an appsrc (to feed data into it), a typefind, a qtdemux, optionally a decoder (in case we want to support Encrypted Media Extensions too), and an appsink (to pick the parsed data). In my eagerness to simplify, I removed support for all formats except ISO MP4, the only one really needed for YouTube. The other containers could be reintroduced in the future.

[Figure: mse5]
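
As a hedged sketch of the two ends of such an append pipeline, here is how data could be pushed in through the appsrc and how the parsed samples could be collected from the appsink. Names and structure are illustrative, not the actual AppendPipeline code:

 #include <gst/gst.h>
 #include <gst/app/gstappsrc.h>
 #include <gst/app/gstappsink.h>

 // Called when SourceBuffer.appendBuffer() delivers new bytes: push them into
 // the append pipeline through its appsrc.
 static void pushAppendedData(GstElement* appsrc, const guint8* data, gsize size)
 {
     GstBuffer* buffer = gst_buffer_new_allocate(nullptr, size, nullptr);
     gst_buffer_fill(buffer, 0, data, size);
     gst_app_src_push_buffer(GST_APP_SRC(appsrc), buffer); // Takes ownership of the buffer.
 }

 // "new-sample" callback on the appsink: a parsed sample is ready to be handed
 // over to the MSE layer (turned into a WebCore MediaSample in the real code).
 static GstFlowReturn onNewParsedSample(GstAppSink* appsink, gpointer userData)
 {
     GstSample* sample = gst_app_sink_pull_sample(appsink);
     gst_sample_unref(sample);
     return GST_FLOW_OK;
 }

 // Wiring: g_object_set(appsink, "emit-signals", TRUE, nullptr);
 //         g_signal_connect(appsink, "new-sample", G_CALLBACK(onNewParsedSample), nullptr);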

The playback pipeline is what remains of the old unified pipeline, only simpler. It’s still based on playbin, and the main difference is that WebKitMediaSrc is now simpler too. It consists of N sources (one per SourceBuffer), each composed of an appsrc (to feed the buffered samples), a parser block and the src pads. Uridecodebin is in charge of instantiating it, like before. The PlaybackPipeline class was created to take care of some of the management logic.

[Figure: mse6]

The AppendPipeline class manages the callback forwarding between threads, using asserts to strictly enforce that WebCore MSE classes are accessed only from the main thread. AtomicString and all the classes inheriting from RefCounted (instead of ThreadSafeRefCounted) can’t be safely managed from different threads, and that includes most of the classes used in the MSE implementation. However, the demuxer probes and other callbacks sometimes happen in the streaming thread of the corresponding element, not in the main thread, which is why the calls must be forwarded.
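
The forwarding idea can be sketched with plain GLib primitives, as below. The actual WebKit code uses its own main-thread dispatch helpers, so take this only as an illustration of jumping from a streaming thread to the main thread:

 #include <gst/gst.h>

 static gboolean processSampleOnMainThread(gpointer userData)
 {
     GstSample* sample = (GstSample*) userData;
     // It is now safe to touch the WebCore MSE objects: we're on the main thread.
     gst_sample_unref(sample);
     return G_SOURCE_REMOVE;
 }

 static GstPadProbeReturn onStreamingThreadProbe(GstPad* pad, GstPadProbeInfo* info, gpointer userData)
 {
     GstBuffer* buffer = GST_PAD_PROBE_INFO_BUFFER(info);
     GstCaps* caps = gst_pad_get_current_caps(pad);
     GstSample* sample = gst_sample_new(buffer, caps, nullptr, nullptr);
     if (caps)
         gst_caps_unref(caps);

     // Jump from the element's streaming thread to the default (main) context.
     g_main_context_invoke(nullptr, processSampleOnMainThread, sample);
     return GST_PAD_PROBE_OK;
 }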

AppendPipeline also uses an internal state machine to manage the different stages of the append operation and all the actions relevant for each stage (starting/stopping the timeouts, processing the samples, finishing the appends and managing SourceBuffer aborts).

[Figure: mse7]

Seek support for the real world

With this new design, the use case of a typical seek works like this (very simplified):

  1. The video may currently be playing at some position (which is, of course, buffered).
  2. The JavaScript code appends data for the new target position to each of the video/audio SourceBuffers. Each AppendPipeline processes the data and JavaScript is aware of the new buffered ranges.
  3. JavaScript seeks to the new position. This ends up calling the seek() and doSeek() methods.
  4. MediaPlayerPrivateGStreamerMSE instructs WebKitMediaSrc to stop accepting more samples until further notice and to prepare the seek (reset the seek-data and need-data counters). The player private performs the real GStreamer seek in the playback pipeline and leaves the rest of the seek pending for when WebKitMediaSrc is ready.
  5. The GStreamer seek causes some changes in the pipeline and eventually all the appsrcs in WebKitMediaSrc emit the seek-data and need-data signals (see the sketch after this list). Then WebKitMediaSrc notifies the player private that it’s ready to accept samples for the target position and needs data. MediaSource is notified here to seek and this triggers the enqueuing of the new data (non-displaying samples and visible ones).
  6. The seek at the player private level that was left pending in step 4 now continues, giving WebKitMediaSrc permission to accept samples again.
  7. The seek is completed. The samples enqueued in step 5 now flow through the playback pipeline and the user can see the video from the target position.
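
As an illustration of the appsrc signals involved in step 5, this is how a client could wire them up. It is only a sketch; the real WebKitMediaSrc keeps per-source counters and more state:

 #include <gst/gst.h>

 static gboolean onAppsrcSeekData(GstElement* appsrc, guint64 offset, gpointer userData)
 {
     // The playback pipeline has been seeked; this source is ready for the new position.
     g_print("appsrc %s got seek-data (offset %" G_GUINT64_FORMAT ")\n",
         GST_ELEMENT_NAME(appsrc), offset);
     return TRUE;
 }

 static void onAppsrcNeedData(GstElement* appsrc, guint length, gpointer userData)
 {
     // Once every appsrc has emitted both seek-data and need-data, the player
     // private can notify MediaSource and start enqueuing the buffered samples.
     g_print("appsrc %s needs data\n", GST_ELEMENT_NAME(appsrc));
 }

 static void wireAppsrcSignals(GstElement* appsrc)
 {
     g_signal_connect(appsrc, "seek-data", G_CALLBACK(onAppsrcSeekData), nullptr);
     g_signal_connect(appsrc, "need-data", G_CALLBACK(onAppsrcNeedData), nullptr);
 }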

That was just the typical case, but more complex scenarios are also supported. This includes multiple seeks (pressing the forward/backward button several times), seeks to buffered areas (the easiest ones) and to unbuffered areas (where the seek sequence needs to wait until the data for the target area is appended and buffered).

Close cooperation from qtdemux is also required in order to get accurate presentation timestamps (PTS) for the processed media. We detected a special case when appending data much further forward in the media stream during a seek: qtdemux kept generating sequential presentation timestamps, completely ignoring the TFDT atom, which tells where the timestamps of the new data block must start. I had to add a new “always-honor-tfdt” property to qtdemux to solve that problem.
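
With that property in place, enabling the behaviour from the pipeline-building code is a one-liner, assuming a GStreamer build that carries the patch (the property is not necessarily available in stock qtdemux):

 #include <gst/gst.h>

 static void configureDemuxerForMSE(GstElement* qtdemux)
 {
     // Make qtdemux trust the TFDT atom of each fragment instead of generating
     // sequential timestamps, so out-of-order appends keep their real PTS.
     g_object_set(qtdemux, "always-honor-tfdt", TRUE, nullptr);
 }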

With all these changes the YouTube 2015 and 2016 tests are green for us and YouTube TV is completely functional on a Raspberry Pi 2.

Upstreaming the code during Web Engines Hackfest 2015

All this work is currently in the Metrological WebKitForWayland repository, but it could be a great upstream contribution. Last December I was invited to the Web Engines Hackfest 2015, an event hosted at the Igalia premises in A Coruña (Spain). I attended with the intention of starting the upstreaming process of our MSE implementation for GStreamer, so other ports such as WebKitGTK+ and WebKitEFL could also benefit from it. Thanks a lot to our sponsors for making it possible.

At the end of the hackfest I managed to have something that builds in a private branch. I’m currently devoting some time to working on the regressions in the YouTube 2016 tests, cleaning up unrelated EME stuff and adapting the code to the style guidelines. Eventually, I’m going to submit the patch for review on bugzilla. There are some topics I’d like to discuss with other engineers as part of this process, such as the interpretation of the spec regarding how the ReadyState is computed.

In parallel to the upstreaming process, our plans for the future include getting rid of the append timeouts by finding a better alternative, improving append performance and testing seek even more thoroughly with other real use cases. In the long term we should add support for appendStream() and increase the set of supported media containers and codecs at least to WebM and VP8.

Let’s keep hacking!

Hacking on Chromium for Android from Eclipse (part 3)

In the previous posts we learnt how to code and debug Chromium for Android C++ code from Eclipse. In this post I’m going to explain how to open the ChromeShell Java code, so that you will be able to hack on it like you would in a normal Android app project. Remember, you will need to install the ADT plugin in Eclipse and the full-featured adb which comes with the standalone SDK from the official page. Don’t try to reuse the Android SDK in “third_party/android_tools/sdk”.

Creating the Java project in Eclipse

Follow these instructions to create the Java project for ChromeShell (for instance):

  • File, New, Project…, Android, Android project from existing code
  • Choose “src/chrome/android/shell/java” as the project root, because that’s where the AndroidManifest.xml is. Don’t copy anything to the workspace.
  • The project will have a lot of unmet package dependencies. You have to manually import some jars:
    • Right click on the project, Properties, Java build path, Libraries, Add external Jars…
    • Then browse to “src/out/Debug/lib.java” (assuming a debug build) and import these jars (use CTRL+click for multiple selection in the file chooser):
      • base_java.jar, chrome_java.jar, content_java.jar, dom_distiller_core_java.jar, guava_javalib.jar,
      • jsr_305_javalib.jar, net_java.jar, printing_java.jar, sync_java.jar, ui_java.jar, web_contents_delegate_android.jar
    • If you keep having problems, go to “lib.java”, run this script and find which jar contains the class you’re missing:
for i in *.jar; do echo "--- $i ---------------------"; unzip -l $i; done | most
  • The generated resources directory “gen” produced by Eclipse is going to lack a lot of stuff.
    • It’s better to make it point to the “right” gen directory used by the native build scripts.
    • Delete the “gen” directory in “src/chrome/android/shell/java” and make a symbolic link:
ln -s ../../../../out/Debug/chrome_shell_apk/gen .
    • If you ever “clean project” by mistake, delete the chrome_shell_apk/gen directory and regenerate it using the standard ninja build command.
  • The same for the “res” directory. From “src/chrome/android/shell/java”, do this (and ignore the errors):
cp -sr $PWD/../res ./
cp -sr $PWD/../../java/res ./
  • I haven’t been able to solve the problem of integrating all the string definitions. A lot of string-related errors will appear in files under “res”. For the moment, just ignore those errors.
  • Remember to use a fresh standalone SDK. Install support for Android 4.4.2. You will probably also need to modify the project properties to match the 4.4.2 version you have support for.

And that’s all. Now you can use all the Java code indexing features of Eclipse. For the moment, you still need to build and install to the device using the command line recipe, though:

 ninja -C out/Debug chrome_shell_apk
 build/android/adb_install_apk.py --apk ChromeShell.apk --debug

Debugging Java code

To debug the Java side of the app running in the device, follow the same approach that you would if you had a normal Java Android app:

  • Launch the ChromeShell app manually from the device.
  • In Eclipse, use the DDMS perspective to locate the org.chromium.chrome.shell process. Select it in the Devices panel and connect the debugger using the “light green bug” icon (not to be confused with the normal debug icon available from the other perspectives).
  • Change to the Debug perspective and set breakpoints as usual.

Enjoy!

[Screenshot: chromium_android_eclipse_java]

Hacking on Chromium for Android from Eclipse (part 2)

In the previous post, I showed all the references needed to get the Chromium for Android source code, set up Eclipse and build the ChromeShell app. Today I’m going to explain how to debug that app running on the device.

Debugging from command line

This is the first step, and we must make sure it works before trying to debug directly from Eclipse. The steps are explained in the debugging-on-android howto, but I’m showing them here for reference.

Perform the “build Chrome shell” steps but using debug parameters:

 ninja -C out/Debug chrome_shell_apk
 build/android/adb_install_apk.py --apk ChromeShell.apk --debug

To avoid the need for a rooted Android device, set up ChromeShell as the app to be debugged by going to Android Settings, Debugging on your device. Now, to launch a gdb debugging session from a console:

 cd ~/ANDROID/src
 . build/android/envsetup.sh
 ./build/android/adb_gdb_chrome_shell --start

You will see that the adb_gdb script called by adb_gdb_chrome_shell pulls some libraries from your device to /tmp. If everything goes fine, gdb shouldn’t have any problem finding all the symbols of the source code. If not, please check your setup again before trying to debug in Eclipse.

Debugging from Eclipse

Ok, this is going to be hacky. Hold on to your hat!

Eclipse can’t use adb_gdb_chrome_shell and adb_gdb “as is”, because they don’t accept gdb command line parameters. We must create some wrappers in $HOME/ANDROID, our working directory. This means “/home/enrique/ANDROID/” for me. The wrappers are:

Wrapper 1: adb_gdb

This is a copy of ~/ANDROID/src/build/android/adb_gdb with some modifications. It computes the same things as the original, but doesn’t launch gdb. Instead, it creates two symbolic links in ~/ANDROID:

  • gdb is a link to the arm-linux-androideabi-gdb command used internally.
  • gdb.init is a link to the temporary gdb config file created internally.

These two files will make life simpler for Eclipse. After that, the script prints the actual gdb command that it would have executed (but hasn’t), and reads a line, waiting for ENTER. After the user presses ENTER, it just kills everything. Here are the modifications that you have to make to the copy of the original adb_gdb. Note that my $HOME (~) is “/home/enrique”:

 # In the beginning:
 CHROMIUM_SRC=/home/enrique/ANDROID/src
 ...
 # At the end:
 log "Launching gdb client: $GDB $GDB_ARGS -x $COMMANDS"

 rm /home/enrique/ANDROID/gdb
 ln -s "$GDB" /home/enrique/ANDROID/gdb
 rm /home/enrique/ANDROID/gdb.init
 ln -s "$COMMANDS" /home/enrique/ANDROID/gdb.init
 echo
 echo "---------------------------"
 echo "$GDB $GDB_ARGS -x $COMMANDS"
 read

 exit 0
 $GDB $GDB_ARGS -x $COMMANDS &&
 rm -f "$GDBSERVER_PIDFILE"

Wrapper 2: adb_gdb_chrome_shell

It’s a copy of ~/ANDROID/src/build/android/adb_gdb_chrome_shell with a simple modification in PROGDIR:

 PROGDIR=/home/enrique/ANDROID

Wrapper 3: gdbwrapper.sh

Loads envsetup, returns the gdb version for Eclipse if asked, and invokes adb_gdb_chrome_shell. This is the script to be run in the console before starting the debug session in Eclipse. It will invoke the other scripts and wait for ENTER.

 #!/bin/bash
 cd /home/enrique/ANDROID/src
 . build/android/envsetup.sh
 if [ "X$1" = "X--version" ]
 then
  exec /home/enrique/ANDROID/src/third_party/android_tools/ndk/toolchains/arm-linux-androideabi-4.8/prebuilt/linux-x86_64/bin/arm-linux-androideabi-gdb --version
  exit 0
 fi
 exec ../adb_gdb_chrome_shell --start --debug
 #exec ./build/android/adb_gdb_chrome_shell --start --debug

Setting up Eclipse to connect to the wrapper

Now, the Eclipse part. From the “Run, Debug configurations” screen, create a new “C/C++ Application” configuration with these features:

  • Name: ChromiumAndroid 1 (name it as you wish)
  • Main:
    • C/C++ Application: /home/enrique/ANDROID/src/out/Debug/chrome_shell_apk/libs/armeabi-v7a/libchromeshell.so
    • IMPORTANT: From time to time, libchromeshell.so gets corrupted and is truncated to zero size. You must regenerate it by doing:

rm -rf /home/enrique/ANDROID/src/out/Debug/chrome_shell_apk
ninja -C out/Debug chrome_shell_apk

    • Project: ChromiumAndroid (the name of your project)
    • Build config: Use active
    • Uncheck “Select config using C/C++ Application”
    • Disable auto build
    • Connect process IO to a terminal
  • IMPORTANT: Change “Using GDB (DSF) Create Process Launcher” and use “Legacy Create Process Launcher” instead. This will enable “gdb/mi” and allow us to set the timeouts to connect to gdb.
  • Arguments: No changes
  • Environment: No changes
  • Debugger:
    • Debugger: gdb/mi
    • Uncheck “Stop on startup at”
    • Main:
      • GDB debugger: /home/enrique/ANDROID/gdb (IMPORTANT!)
      • GDB command file: /home/enrique/ANDROID/gdb.init (IMPORTANT!)
      • GDB command set: Standard (Linux)
      • Protocol: mi2
      • Uncheck: “Verbose console”
      • Check: “Use full file path to set breakpoints”
    • Shared libs:
      • Check: Load shared lib symbols automatically
  • Source: Use the default values without modification (absolute file path, program relative file path, ChromiumAndroid (your project name)).
  • Refresh: Uncheck “Refresh resources upon completion”
  • Common: No changes.

When you have everything: apply (to save), close and reopen.

Running a debug session

Now, run gdbwrapper.sh in an independent console. When it pauses and starts waiting for ENTER, change to Eclipse, press the Debug button and wait for Eclipse to attach to the debugger. The execution will briefly pause in an ioctl() call and then continue.

To test that the debugging session is really working, set a breakpoint in content/browser/renderer_host/render_message_filter.cc, at content::RenderMessageFilter::OnMessageReceived, and continue the execution. It should break there. Now, from the Debug perspective, you should be able to see the stack trace and access the local variables.

Welcome to the wonderful world of Android native code debugging from Eclipse! It’s a bit slow, though.

This completes the C++ side of this series of posts. In the next post, I will explain how to open the Java code of ChromeShellActivity, so that you will be able to hack on it like you would in a normal Android app project.

[Screenshot: chromium_android_eclipse]

Hacking on Chromium for Android from Eclipse (part 1)

The Chromium Developers website has some excellent resources on how to set up an environment to build Chromium for Linux desktop and for Android. There’s also a detailed guide on how to set up Eclipse as your development environment, enabling you to take advantage of code indexing and enjoy features such as type hierarchy, call hierarchy, macro expansion and references: a lot of tools much better than the poor man’s trick of grepping the code.

Unfortunately, there are some integration aspects not covered by those guides, so joining all the dots is not a smooth task. In this series of posts, I’m going to explain the missing parts needed to set up a working environment to code and debug Chromium for Android from Eclipse, both C++ and Java code. All the steps and commands in this series of posts have been tested in an Ubuntu Saucy chroot. See my previous post on how to set up a chroot if you want to know how to do this.

Get the source code

See the get-the-code guide. Don’t try to reconvert a normal Desktop build into an Android build. It just doesn’t work. The detailed steps to get the code from scratch and prepare the dependencies are the following:

 cd ANDROID # Or the directory you want
 fetch --nohooks android --nosvn=True
 cd src
 git checkout master
 build/install-build-deps.sh
 build/install-build-deps-android.sh
 gclient sync --nohooks

Configure and generate the project (see AndroidBuildInstructions), from src:

 # Make sure that ANDROID/.gclient has this line:
 # target_os = [u'android']
 # And ANDROID/chromium.gyp_env has this line:
 # { 'GYP_DEFINES': 'OS=android', }
 gclient runhooks

Build Chrome shell, from src:

 # This builds
 ninja -C out/Release chrome_shell_apk
 # This installs in the device
 # Remember the usual stuff to use a new device with adb:
 # http://developer.android.com/tools/device.html 
 # http://developer.android.com/tools/help/adb.html#Enabling
 # Ensure that you can adb shell into the device
 build/android/adb_install_apk.py --apk ChromeShell.apk --release

If you ever need to update the source code, follow this recipe and use Release or Debug at your convenience:

 git pull origin master
 gclient sync
 # ninja -C out/Release chrome_shell_apk
 ninja -C out/Debug chrome_shell_apk
 # build/android/adb_install_apk.py --apk ChromeShell.apk --release
 build/android/adb_install_apk.py --apk ChromeShell.apk --debug

As a curiosity, it’s worth mentioning that adb is installed in third_party/android_tools/sdk/platform-tools/adb.

Configure Eclipse

To configure Eclipse, follow the instructions in LinuxEclipseDev. They work nicely with Eclipse Kepler.

In order to open and debug the Java code properly, it’s also worth installing the ADT plugin in Eclipse. Don’t try to reuse the Android SDK in “third_party/android_tools/sdk”; it seems to lack some things. Download a fresh standalone SDK from the official page instead and tell the ADT plugin to use it.

In the next post, I will explain how to debug C++ code running in the device, both from the command line and from Eclipse.

[Screenshot: chromium_android_eclipse_cpp]