Version 7 (modified by dennisr, 7 years ago)


Specifying the configuration of your Virtual Human

Every application has different requirements for its Virtual Human: gesture repertoire, body graphics, whether or not to use a speech engine, which voice to use, and so on. We provide an XML format to specify in detail what should and should not be initialized for your virtual human.

Main idea: you specify Embodiments and Engines. Each Embodiment offers an interface for displaying expressions and behaviors: skeleton control for animation, MPEG-4 control for a face embodiment, motor control for a robot embodiment, and so on. An Engine specializes in displaying specific behavior types (e.g., head and hand gestures) on one or more of the Embodiments. In an Asap Virtual Human Loader file you specify the various embodiments and engines, and connect each embodiment to the right engines.

Start of documentation

General structure: the file consists of three sections, which must appear in the following order.


   Subsection: Specification of parsing, scheduling, pipes, ports, and adapters

   Subsection: Specification of Embodiments, Engines, and other loaders

   Subsection: Re-routing of BML to Engines (overriding defaults)
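The three sections above might be laid out as in the sketch below. Only the PipeLoader element is confirmed by this page; the section comments and the element order are illustrative, and the wrapper tags for sections 2 and 3 are left out because their exact names depend on your Asap distribution:

```xml
<!-- Hypothetical loader-file skeleton; consult your Asap distribution
     for the exact root element and section tags. -->

<!-- Section 1: parsing, scheduling, pipes, ports, and adapters -->
<PipeLoader id="logpipe" loader="asap.realizerembodiments.LogPipeLoader">
        <Log requestlog="..." feedbacklog="..."/>
</PipeLoader>

<!-- Section 2: Embodiments, Engines, and other loaders -->
<!-- ... -->

<!-- Section 3: re-routing of BML to Engines (overriding defaults) -->
<!-- ... -->
```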


Parsing, scheduling, pipes, ports, and adapters

LogPipe Module: in Asap core. Allows you to log all incoming BML requests and outgoing feedback messages to an SLF4J channel. Configurable: the logger names for requests and for feedback.

<PipeLoader id="..." loader="asap.realizerembodiments.LogPipeLoader">
        <Log requestlog="..."  feedbacklog="..."/>
</PipeLoader>
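For example, a filled-in LogPipe configuration might look like the following; the id and the two SLF4J logger names are arbitrary examples, not required values:

```xml
<!-- "bml.requests" and "bml.feedback" are example SLF4J logger names;
     choose any names that fit your logging configuration. -->
<PipeLoader id="logpipe" loader="asap.realizerembodiments.LogPipeLoader">
        <Log requestlog="bml.requests" feedbacklog="bml.feedback"/>
</PipeLoader>
```

How these loggers are routed to files or the console is then determined by your SLF4J backend configuration (e.g., logback or log4j), not by this file.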

ActiveMQPipe Module: org=HMI, name=HMIActiveMQPipe. Connects your realizer seamlessly to the ActiveMQ network, through the asap.bml.request and corresponding feedback channels. No configuration settings; connects to ActiveMQ on localhost at the default port.

<PipeLoader id="..." loader="hmi.activemq.bmlpipe.ActiveMQPipeLoader"/>
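The two pipe loaders can be combined in one loader file, for example to log traffic while also accepting BML over ActiveMQ. The ids and logger names below are arbitrary examples, and whether the relative order of pipe loaders matters is not stated on this page:

```xml
<!-- Example combination; ids and logger names are placeholders. -->
<PipeLoader id="activemqpipe" loader="hmi.activemq.bmlpipe.ActiveMQPipeLoader"/>
<PipeLoader id="logpipe" loader="asap.realizerembodiments.LogPipeLoader">
        <Log requestlog="bml.requests" feedbacklog="bml.feedback"/>
</PipeLoader>
```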

User contributions

This page still requires a lot more documentation. Please ask specific questions below.


Your feedback is welcome.