Normally in a Thief conversation, each line of speech is assigned to the character who says it, and only one line may be said at a time. This rules out some features of natural conversation, such as one person talking over another, someone butting in abruptly, or someone coughing in the middle of another’s words. There is, however, a workaround. The key is to have a single *.WAV file contain all the speech, with coughs, interruptions, etc., just as you want them.
Making the *.WAV – by Andrew Dagilis
Before anything technical happens, you must first create your conversation on paper. In the case of Festus and Doofus (Ranstall Keep’s comedy relief duo), this means creating a verbal skit. A much longer tutorial could be written explaining the mechanics involved in a good skit – timing, pauses, doubletakes, verbal pacing, choice of words and delivery, appropriate voice tone and inflection, etc. – but for now, suffice it to say that you should spend at least as much time writing and rewriting your skit’s dialogue as you do recording and mixing it.
Once your skit is written, rehearse it over and over and over. I’m assuming you’ll be your own voice actor – if other people will be performing it, they need to rehearse it over and over. Even seasoned pros have trouble delivering an excellent performance from a cold reading, so don’t be shy about locking yourself in a room and repeating your lines until you’re sick of the sound of your own voice. If you’re going to do any experimenting, this is the time to do it, not when the red Record light is flashing.
Now you have a well-written skit, a good idea of how you’ll deliver it and an intimate knowledge of the text – time to fire up the microphone.
In a longer tutorial devoted entirely to sound recording, mixing and producing (upcoming on the TTLG site), I describe hardware and software basics. For the purposes of this tutorial, we’ll assume you’re using one of the better sound recording applications – Cakewalk Pro Audio, SoundForge, CoolEdit Pro, Cubase Audio, Deck (for the Mac), SAW Plus, Pro Tools, etc. – and a good unidirectional microphone equipped with a pop filter.
The whole trick of creating a .WAV file with overlapping dialogue is to record it in stereo, with one character assigned to the left channel and the other one to the right channel. Skits with more than two characters will require either greater multitracking capabilities from your recording application (CoolEdit Pro can record up to 64 tracks simultaneously, if you have the RAM for it, while Cakewalk Pro Audio can go up to 256 tracks — 128 tracks in two-channel stereo), or else that you plan your recording session very carefully on paper beforehand and assign each character to the right or the left channel so that his/her dialogue is never contiguous with another character recorded on the same channel. Using different dynamics (levels of relative loudness and softness) for these various characters also helps when working with limited multitracking environments, as does employing radically different vocal ranges, accents, tones, etc. for each one.
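To see the channel mechanics in miniature, here is an illustrative Python sketch (not part of the workflow described above, and the sample values are made up) that interleaves two mono 16-bit tracks into one stereo stream and writes it as a .WAV, with the right channel’s line starting while the left channel is still speaking:

```python
import io
import struct
import wave

def mix_to_stereo(left_samples, right_samples):
    """Interleave two mono 16-bit sample lists into stereo frames,
    padding the shorter track with silence so the channels align."""
    n = max(len(left_samples), len(right_samples))
    left = left_samples + [0] * (n - len(left_samples))
    right = right_samples + [0] * (n - len(right_samples))
    return b''.join(struct.pack('<hh', l, r) for l, r in zip(left, right))

# Hypothetical one-second lines at 44.1 kHz: the right channel's
# character starts talking while the left channel is still speaking.
rate = 44100
left = [1000] * rate                        # left channel, full second
right = [0] * (rate // 2) + [1000] * rate   # right channel enters at 0.5 s

frames = mix_to_stereo(left, right)

buf = io.BytesIO()
with wave.open(buf, 'wb') as w:
    w.setnchannels(2)     # stereo: one character per channel
    w.setsampwidth(2)     # 16-bit samples
    w.setframerate(rate)  # CD-quality sampling rate
    w.writeframes(frames)
```

In a real session your recording application does all of this for you; the point is only that an overlap is nothing more than the two channels carrying sound at the same instant.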
As an example, let’s examine how the “contingency” skit for Ranstall Keep’s The Mountain Trail was created.
I used my PowerMac and Deck v.2.65, with an AKG C414 microphone set to record in a cardioid pattern plugged straight into my Mackie 1202-VLZ 12-channel mixer. Playback and monitoring were done while wearing a pair of Sony MDR-7509 professional headphones and then through a set of KRK reference speakers. Doofus was assigned to the left channel and Festus to the right one. I set the stereo width at 10:00 and 14:00 since, in the mission, the two are standing fairly close to one another — 11:00 and 13:00 might have been even better.
I then proceeded to record all of Doofus’s lines in the left channel (at a CD-quality sampling rate of 44.1 kHz), pausing between each one. The length of each pause should be of at least two seconds (to give you some maneuvering room during the editing phase) but does not have to reproduce the precise duration of the conversation’s actual pauses – you’ll determine those when you edit your session later. Once Doofus was recorded to my liking, I reset the session to zero (I “rewound the tape”, so to speak) and recorded Festus’s lines in the right channel.
Once both sides of the conversation are recorded to your liking, open your application’s editing window. You now have to clean up each character’s track, deleting the between-lines breathing, lip smacking, teeth clicking, nostril whistling and other inevitable artifacts of microphone recording. Do this while wearing a good set of headphones in order to really zoom into each track.
If your recording application allows for track sectioning and drag-and-drop editing – that is, you can erase sections of a given track without the remaining sections then joining together – your job is almost done. Simply move each phrase along the track so that the first word of the next reply (on the other track) starts playing while the first character is still speaking the last few words of his/her line. You’ll have to experiment with each bit of overlap, making it start earlier or later, until it sounds as natural as possible.
If your recording software does not allow for track sectioning (when you delete a section, the remaining portions snap and weld together in a contiguous track), you’ll have to create your pauses by inserting silences of various lengths between your characters’ lines. The process is more tedious but the results are identical.
Once the conversation has been edited and rebuilt to your liking, mix it down to a single stereo clip at 44.1 kHz, 16-bit sound quality. Now you can start layering your audio sweetening (reverb, chorusing, echoes, etc.). Such effects can be added either by running your original signal through a dedicated hardware unit such as those found in most professional recording studios (the Ensoniq DP/2 or DP/4+, the Lexicon LXP, the Roland GP-100, the FXR Elite, the Alesis Quadraverb, etc.) or by processing your clip with one of the various software effects packages available for digital recording (CoolEdit Pro, SoundForge, JVP, MDT, Hyperprism, PEAK, SoundEdit, DSP/FX, Cybersound FX, WaveLab, TC Tools, etc.).
For F&D’s “contingency” clip, Alex and I decided that since the conversation was a one-shot event and very localized, it should have its own innate reverb rather than assigning this function to EAX. It’s important that you NOT add innate reverb to a clip which is to be processed through EAX since reverb (and all other similar time-delay effects) is an additive process, meaning that any reverberation supplied by EAX will be added on top of what is already in your clip, making the end result unintelligibly muddy.
Once the reverb was added, the entire clip was normalized. Normalization is a gain-related (loudness-related) digital process which optimizes a soundfile’s dynamic range (the gap between the loudest and the softest possible signal). The application automatically determines the amount of gain (overall volume increase) required to raise the clip’s loudest peak as close as possible to the system’s dynamic ceiling (the loudest a signal can be before it exceeds the system’s dynamic range and saturates itself with digital noise). Once this amount of gain is calculated, the amplitude (overall strength of the signal) is increased by this gain ratio, making the entire clip much louder. The two most common methods good software applications use to calculate normalization ratios are peak level and average RMS power — if you’re after sheer loudness rather than dynamic finesse, opt for peak-level normalizing.
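Peak-level normalization is simple enough to sketch. This illustrative Python snippet is my own sketch of the peak method described above, applied to a hypothetical list of 16-bit samples – it is not the code of any particular audio package:

```python
def peak_normalize(samples, ceiling=32767):
    """Scale 16-bit samples so the loudest peak just reaches the
    dynamic ceiling -- the peak-level method described above."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)       # pure silence: nothing to raise
    gain = ceiling / peak          # one gain ratio for the whole clip
    return [round(s * gain) for s in samples]

quiet = [100, -250, 500, -125]     # hypothetical low-level clip
loud = peak_normalize(quiet)       # loudest sample now sits at the ceiling
```

An RMS-based normalizer would compute the gain from the clip’s average power instead of its single loudest peak, which better preserves dynamic finesse at the cost of sheer loudness.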
F&D’s stereo “contingency” clip runs for 37 seconds, which means the uncompressed .WAV is about 6 megs in size (stereo .WAVs recorded at 44.1 kHz average out to approximately 10 megs per minute), way too big to be inserted in a mission – the clip had to be downsampled. I made different versions (22 kHz and 11 kHz), some in 8-bit sound, and sent them all to Alex so that she could choose the one she preferred – one was even in mono. I stayed away from the DVI/IMA ADPCM format (used in most of the sound files of both THIEF and THIEF 2, which compresses 44.1 kHz .WAVs to a much smaller 22 kHz 4-bit format) because the dialogue ran long enough for quite a bit of hissiness to creep into it; also, Festus’s high-pitched rasp became irritatingly scratchy when saved as an ADPCM clip. Simple downsampling introduced some hissiness (especially when converted to 8-bit sound), but with a bit of judicious re-EQing, Festus’s raspiness never became painful to hear.
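Those size figures follow directly from the PCM arithmetic – sample rate × bytes per sample × channels × duration. A quick check:

```python
rate = 44100              # samples per second (CD quality)
bytes_per_sample = 2      # 16-bit audio
channels = 2              # stereo

per_minute = rate * bytes_per_sample * channels * 60
clip_37s = rate * bytes_per_sample * channels * 37

print(per_minute)  # 10584000 bytes -- roughly 10 megs per minute
print(clip_37s)    # 6526800 bytes -- about 6 megs for the 37-second clip
```

Halving the sample rate or the bit depth halves the size, and dropping to mono halves it again – which is why the 22 kHz, 11 kHz, 8-bit and mono versions were worth auditioning.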
Since F&D are to be recurring characters (they appear in the last four chapters of the Ranstall Keep episode), care must be taken to make them visible and/or audible but inaccessible to the player — it wouldn’t do for them to be killed in mission two only to have them reappear, mysteriously hale and hearty, in mission three. They can still pose a threat to the player, however, by tripping alarms and triggering traps and other guards who can inflict damage on the player. Because they can react to noise made by the player, it’s important that the conversation be made interruptible, or else the dialogue would continue incongruously while the pair is busy searching for the source of the noise – see Alex’s portion of the tutorial for more on this subject.
Placing the conversation in the mission – by Alex
Before going on, you may wish to familiarise yourself with general conversations by reading CONVERSATION GUIDE by Deep Qantas. You should also find the name for the line of conversation you wish to replace by looking for it in the original mission and taking note of the exact filename.
Step A – Setting up the conversation
Start by placing a marker at the point where you wish the conversation to play, i.e. where your AIs will be standing. Rename the marker by double-clicking the object name and entering a name of your choice in the box. This will make it easier to find if you ever need to change its properties at a later stage.
The marker needs the correct script to make it play the conversation. This script is “TrapConverse”. Open the marker’s properties, select add->S->Scripts, and type TrapConverse in the Script 0 box.
Now link all the AIs using AIConversationActor links. With the marker chosen, select links in the bottom left of the screen.
Press the “add” button and then fill in the box as follows:
Flavor: AIConversationActor
From: The Marker
To: The AI
OK the link, then highlight its link number and select Data. This brings up a data box:
Enter 1 in the box – this is the number the AI will be known as throughout the conversation.
Repeat the above for all your AIs, giving each one a unique number, e.g., the second AI should be “2”, the third “3” etc.
Now that we have specified all our actors, we can set up the conversation itself. Go into the marker’s properties, select add->AI->Conversations->save conversation, and check the box that comes up.
Now select add->AI->Conversations->conversation, which brings up the conversation steps box.
Each of the numbers within this box is a step in the conversation. Double-click 01, not 00 (00 will be used later). Fill in the box that appears as follows.
Actor: The actor who will perform that segment of the conversation. Select Actor 1.
Flags: See the aforementioned guide for details.
Conversation Action 1: This allows you to choose whether the actor speaks, frobs a button, moves, etc. Set to “Play sound/motion”.
Argument 1: Enter the name of the conversation line you wish to be played here.
Argument 2: Enter the Line Number (LineNo). In this case it is “LineNo 1”.
Argument 3: Used for other commands. Leave empty.
The conversation is now set up to play the *.wav you made, but it needs something to trigger it to start. If you wish the conversation to start when you enter an area, use a BoundsTrigger. Place it where you want the player to be when the conversation starts, then link it to the marker. Select links->add, then enter the following:
Flavor: Control Device.
From: The BoundsTrigger.
To: The marker.
If you go into test mode now, your conversation should play when you walk through the BoundsTrigger. You will notice, however, that if you walk through it a second time it will restart. To eliminate this, you need to create a Destroy Trap. Link the BoundsTrigger to the DestroyTrap.
And then link the DestroyTrap back to the BoundsTrigger.
This will cause the BoundsTrigger to be destroyed after firing once.
Step B – Bringing the conversation to life
Having tested the conversation, you may have noticed several things:
Only the first actor appears involved – the others merely stand around muttering to themselves.
If you alert one of the other AIs, he will search for you whilst his disembodied words continue to play.
Actor One stands around like a statue whilst the conversation plays.
We need to involve the other AIs directly by adding movement to the conversation. In a conventional conversation, this would be done as each line is played, but that approach does not work here as we only play a single *.wav. Instead we must create an individual conversation for each AI and time each actor’s movements to match the speech in the *.wav.
Follow the method described above to create a conversation for each of your remaining AIs (the first AI uses the original conversation), stopping at the stage where you add the AI->Conversations->Conversation property. These individual conversations will control the movement of each AI. If you made only one additional conversation for everyone, each AI would have fewer movement slots available; individual markers also allow the AIs to act independently, so one AI can wave his arms about whilst another is shuffling his feet.
Before adding any movement, we must first make sure all the conversations are being triggered. Rather than mess with the order of the links to the BoundsTrigger, I opted to have the first conversation trigger the others to start. This is why step “00” of the conversation was left open.
Create a button and place it within a “blue room”. Give the button a unique name, as you did with the conversation markers. Now we need to set it up to be the control device for all the additional conversations. Highlight the button and select link->add. For each conversation, set up the following link.
Flavor: Control Device.
From: The Button.
To: The marker.
Now we need to tell Actor One to press this button when his conversation starts. Select the original marker and open properties->AI->Conversations->Conversation by double-clicking on it. This should bring up the “steps” box again. Now select step “00” by double-clicking it, and fill in the box as follows:
Actor: Actor One
Flags: None
Conversation Action 0: Frob Object
Argument 1: Your button’s unique name
Argument 2: Empty
Argument 3: Empty
Now when you go into game-mode, all your conversations should start when you walk through the BoundsTrigger. However, you cannot test this until you have added motions to these conversations.
The success of these conversations is highly dependent on synchronizing the timing of the movements correctly. You will have to use a series of wait and motion commands to get the result you desire. I will describe how to set up a single movement after a short pause.
To implement the pause, open the steps box for your second conversation and double-click “00” – for the original conversation use step “01” – and set up the box as follows:
Actor: The correct actor – in this case ActorTwo.
Flags: None
Conversation Action 0: Wait.
Argument 1: The time the actor should wait – set this so his/her movements match the speech.
Argument 2: Empty.
Argument 3: Empty.
The time specified is in milliseconds, so a value of 3000 makes the actor wait for 3 seconds before performing the next step in the conversation.
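Because each Wait argument is the gap since the previous step rather than an absolute time, it can help to mark your motion cues in seconds against the mixed .WAV and then convert them. A hypothetical helper sketch (the cue times below are made up for illustration):

```python
def waits_from_cues(cues_seconds):
    """Turn absolute motion cue times (seconds into the clip) into
    successive Wait arguments in milliseconds, one per step."""
    waits, last = [], 0.0
    for cue in cues_seconds:
        waits.append(round((cue - last) * 1000))
        last = cue
    return waits

# Hypothetical cues for ActorTwo at 3.0 s, 8.5 s and 14.2 s into the clip
waits = waits_from_cues([3.0, 8.5, 14.2])
```

The resulting values go into Argument 1 of each successive Wait step, so the motions land on the speech no matter how many steps precede them.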
To implement the motion, decide what movement you wish performed (there is a small list of options in Deep Qantas’s tutorial) and note its name. Open the step “00” again if you have closed it. Fill in the next section of the box as explained below.
Actor: The correct actor – in this case ActorTwo.
Flags: None
Conversation Action 1: Play sound/motion.
Argument 1: Empty.
Argument 2: Empty.
Argument 3: Your chosen movement.
Now pop into game mode and test your newly-added motion. You will notice that once ActorTwo has performed his movement, he reverts to muttering to himself. To stop this, add another wait command that keeps him silent until the conversation ends. Implement it as above, with the time set to carry through to the end of the speech.
Continue to set up movements for all your AIs (don’t forget the original actor), working your way down the box for each step and then moving on to the next step when it is full. A finished step will look something like this:
With a little time and effort, you should be able to set up a much more natural conversation using this method.
Halting the conversation when an AI becomes alert.
One of the side effects of this method is that the speech tends to play to the end even after the AIs have become alert. This is, however, correctable.
If an actor is given a second conversation to do, it automatically halts the first. Alert responses allow you to have the AI perform actions when they reach a chosen alert level, so by setting the alert response to trigger a second conversation you can automatically abort the speech.
First create a new button that will act as the trigger for the new conversation. Now select your AI and go to properties->add->AI->Responses->Alert response.
The box that it brings up is very similar to that of a conversation. Fill it out as follows:
Alert Level: The alertness stage needed to perform the action. I have mine set to High (3).
Priority: How important the action is. Set to Absolute to ensure the action is performed immediately.
Response Step 1: Frob Object.
Argument 1: Put the name of the button here.
Argument 2: Leave empty.
Argument 3: Leave empty.
Now make a new conversation which is set up to make each of the AIs perform a small movement – this will abort both the speech and all the movements that go with it. The conversation should look something like this:
Don’t forget to add a control device link from the button to the conversation.
If you test the conversation now, you should find that an alert AI aborts the speech – but, unless you alert actor 1, it starts again a few seconds later. To prevent this, create a destroy trap linked as a control device to your original conversation, and have the new button set as the control device for the destroy trap.
The one drawback in doing this is that, once the conversation is aborted via the button, it will not be repeated at any stage.
Halting the conversation when an AI is blackjacked.
Blackjacking any actor will instantly abort his segment of the conversation; however, all the other characters will continue with theirs. The solution to this final problem lies in S&Rs.
There is a “knockout” stimulus which causes the AI to react to the blackjack by being knocked out. This can be taken advantage of by adding a second response to this stimulus. Select your AI and go to properties->add->act/react->receptrons.
When the box comes up, choose “add” and fill in the box as follows.
Object: The AI.
Stimulus: Knockout.
Min Intensity: 0.
Max Intensity: No Max.
Effect: Frob Object.
Target: The button that aborts the conversation.
Now when the AI receives a knockout stimulus (from a blackjack or a gas arrow), it will frob the button that aborts the conversation.
And with that you’re done.