No amount of documentation can substitute for working directly with these complicated platforms: this project illustrated some disconnects between expectations and reality. Applying a lesson I had learned from the previous hands-on platform project, I attended a Davinci technical seminar. I thought the full-day session would be a hands-on workshop, but I was wrong. (The first hands-on technical workshop for Davinci would not occur until after I had finished this project and just before the print date of this article.) The seminar presented the technical and business aspects of the platform, but there was no hands-on portion. I was surprised by the number of Linux-implementation details the seminar presented. Later, when I had a Davinci evaluation module in my office to work with, I figured out why.
The first half of the seminar presented a nice overview of the system components and each of the peripherals, including some coding examples for the video-processing subsystem. We learned about the Davinci Framework API and the high-level portions of each of the processing layers: application, signal processing, codec engine, and third-party software. The overview also covered the internal and third-party development tools that support each programming layer. The rest of the day consisted of presentations from third-party authorized software providers.
I received a Davinci evaluation module to work with a few weeks later. It was a quick and easy process to connect all of the parts of the system and get the demonstrations operating correctly. My ability to get the demonstrations running quickly was due in no small part to the fact that all of the software resided on the hard drive. The nice thing, though, was that I could confirm that each component and interface was properly connected and operating correctly before setting up my workbench.
I received an unexpected shock when I went to set up my work space. I have done plenty of cross-platform development, and for this project I knew I would be developing for a Linux target. What I didn't know was that I would be working on a Linux-development host; I had assumed that I would be able to work on a Windows-development host. All of the initial tools for the evaluation module operate only on a Linux host. MontaVista supplied the development tools, so I checked MontaVista's Web site to confirm that it supported host environments besides Linux. The company's development tools support Linux-, Solaris-, and Windows-development hosts.
TI's support personnel told me that the Davinci tools supported only a Linux-development host. I did not want to install Linux on my computer. This reluctance had nothing to do with a dislike of Linux; after all, Linux was to be the target operating system. Rather, it had everything to do with having to expend time, energy, and thought on a low-value effort for the project. I had no plans to continue using Linux on my host system; what I needed was to fully understand the Linux configuration on the target. I lacked the time to relearn my host computer's operating system, and working in a Linux host environment steepened my learning curve without helping me understand Linux on the target.
Fortunately, TI support shipped me a Red Hat Linux image and granted me use of a license for this project, so that I could run Linux on my computer under VMware Player. This help allowed me to avoid manually setting up my computer for dual boot and to continue with the project more quickly.
I know how to find the data files and applications on my computer running Windows. The applications behave and interoperate as I have come to expect; I have years of lessons learned in how they behave. Running Linux in a virtual machine was not a debilitating experience, but its applications behave differently from their Windows counterparts. As an example, I used the supplied Gnome Ghostview to read the supplied Adobe Acrobat documents. I never could figure out whether I could perform a simple text-string search within the viewer. I could have searched the Internet for a different viewer, but why should I have to do that when I already have a perfectly acceptable one?
The evaluation module supports only a Linux host because the Davinci-tools team wanted the tools to be available at the same time the evaluation module became available. Also, TI expected most of the early adopters to use a Linux target and to have experience with Linux as a development-host environment. The TI tool group acknowledged that support for other host-development operating systems is a next step. As it turns out, using Linux also involved a significant learning curve for the TI support team, because the team had not worked with Linux before and needed to be proficient with the system before the company started shipping the Davinci evaluation module to customers.
The project plan was to use the encoding-, decoding-, and networking-demonstration code to make a crude video phone. One problem was that I had only one Davinci evaluation module, so the demonstration could not run in real time; I would have to simulate one end or the other of the system on each run. Obtaining the first evaluation module was a challenge; TI had only a limited supply, and there was not enough time to obtain a second module. Another problem was that the encoding and decoding examples that came with the module processed either speech or video data, but not both, so I had no example of how to synchronize the two. In my conversations with TI's tools group, I learned that the company was looking at the GStreamer media-processing library as a framework to help with audio and video synchronization and other capabilities. GStreamer operates at a level above that of a normal library: it manages pipelines of media-processing elements rather than simply exposing function calls. Late in the project, I started working with Ittiam using its video-phone demonstration.
For this project, I also tried out Green Hills Software's Probe and Multi development environment for Davinci. The Probe is a hardware-debugging device that connects to Davinci. Even though Green Hills Software's Multi normally supports development on Windows, Linux, Solaris, and HP-UX host systems, the Davinci tools operated only under Linux during this project. The Multi integrated development environment supports the needs of each of the programming layers: application, DSP, and system. With the Probe, Multi provides visibility into both the ARM and the DSP cores, as well as supporting Linux-kernel awareness. I was able to trace directly into the Linux kernel by building the Linux-kernel image with debugging information turned on and then translating, with Green Hills' dblink tool, the DWARF debugging information that GCC (GNU Compiler Collection) generated into a debugging format that Multi understands.
Multi separates process debugging into windows, each with its own background color; this feature is useful when using process-context breakpoints, which stop execution only when the breakpointed line of code executes as part of the specified process. I was also able to use the Time Machine capability, which, after you stop the processor core, lets you step the instructions backward and forward. Green Hills recently added an always-on feature to Time Machine, which greatly increases its usability because developers no longer need to remember to turn on the Probe to capture an event.
Beyond the platform
Another difference between the ecosystems for the Davinci and OMAP platforms is that TI now offers to act as a first-tier point of contact for technical support and licensing of third-party hardware and software IP (intellectual property). This move simplifies the design team's efforts to acquire and use third-party IP and enables the support team to track problems and more quickly share relevant information among groups facing similar issues. It also enables TI to more easily see trends in technical and business challenges, as well as requests for feature support, so that the company can react quickly to emerging opportunities.
The Davinci evaluation module includes another, less obvious difference from the OMAP development kit that is consistent with a growing trend in embedded designs. Both platforms are heterogeneous, multicore systems. However, the Davinci module includes a third processor core on the board to perform support functions: an ultralow-power, 16-bit MSP430 RISC mixed-signal microcontroller, which controls the nine LEDs, the IR interface, and the real-time clock on the system board. Designers access the MSP430 by reading and writing its on-chip I2C registers. The module thus employs three software-processing architectures to deliver its system capabilities.
Employing multiple processing architectures is a growing trend for complex embedded-system applications. Examples include NXP's Nexperia platform, which combines MIPS cores with TriMedia cores, along with hardware accelerators and media-processing peripherals; the Nexperia platform employs an API analogous to those of the Davinci and OMAP platforms. NEC Electronics' EMMA (Enhanced Multimedia Architecture) platform features devices with as many as 16 processor cores, including 32-bit RISC, 32-bit RISC with DSP, and 64-bit RISC architectures, along with hardware accelerators and stream processors in the same device. An increasing number of tools simplify or help automate the creation and use of custom hardware accelerators and coprocessors for use alongside application processors and DSPs.
The continuing growth of complex embedded designs using multiple heterogeneous processing architectures in a single design represents an area of opportunity for software-development tools to assist in system-level partitioning, in identifying and implementing concurrency in software, and in verifying the operation and interoperability of all the processing engines. It also offers a glimpse at how the industry might finally build and distribute reusable software components, because developers could design each processing architecture or core to limit interaction with software on other cores. Platform providers can more safely offer commodity functions implemented as software on dedicated processors by locking access to those processors for all but those customers willing to risk breaking the encapsulation.
Mechanisms that support easier reuse and reliable interoperability of software components are critical capabilities for abstracting and scaling software complexity. To date, efforts to accomplish this goal have met with limited success, but domain-specific organizations are trying to specify and create a working model for interoperability between software components. Examples of such organizations are CE Linux Forum, the Digital Living Network Alliance, and the SDR Forum.
As the complexity of these processing platforms continues to increase, the support ecosystem around them will become ever more critical to the success of the platform provider, its partners, and the design teams that use these platforms. A key concept that these early ecosystems demonstrate is how a wide audience of application-level designers can rapidly and safely leverage and incorporate the work of a few expert users of a processing architecture in the context of a domain problem.