University of Wollongong

Rendering models for Immersive Voice Communications within Distributed Virtual Environment

Conference contribution, posted on 2024-11-15, 19:35, authored by Ying Peng Que, P. Boustead, Farzad Safaei
This paper compares three possible rendering models for the provision of Immersive Voice Communications (IVCs) in Distributed Virtual Environments (DVEs) such as multiplayer online games. The common aim of these three rendering models is to create a personalised auditory scene for each listening avatar, consisting of a mix of the surrounding avatars' voices, spatialised according to their positions in the virtual world. The first two rendering models are based on amplitude panning localisation and HRTF-based binaural localisation respectively. The computational cost of the latter is deemed too large to meet the identified processing power constraints. A computation reuse scheme was therefore introduced in the third rendering model which, as shown in our simulation results, significantly reduces the computational cost of providing IVC using HRTF-based binaural localisation.
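To illustrate the first class of rendering model described above, the following is a minimal sketch of constant-power amplitude panning: each avatar's mono voice is weighted into a listener's stereo mix according to its azimuth relative to the listener. This is an illustrative example only, not the paper's actual rendering model; all function names and the [-90, 90] degree azimuth convention are assumptions.

```python
# Hypothetical sketch of constant-power amplitude panning for a
# personalised auditory scene. Not the paper's algorithm; names,
# signatures, and the azimuth convention are illustrative only.
import math


def pan_gains(azimuth_deg: float) -> tuple[float, float]:
    """Constant-power left/right gains for an azimuth in [-90, 90]
    degrees, where negative values are to the listener's left."""
    # Map azimuth to a pan angle theta in [0, pi/2]:
    # -90 deg -> 0 (hard left), +90 deg -> pi/2 (hard right).
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2.0)
    # cos^2 + sin^2 = 1, so total radiated power stays constant
    # as the source moves across the stereo field.
    return math.cos(theta), math.sin(theta)  # (left, right)


def mix_scene(
    voices: list[tuple[list[float], float]],
) -> tuple[list[float], list[float]]:
    """Sum each (mono_samples, azimuth_deg) voice into one stereo scene,
    as a personalised mix for a single listening avatar."""
    n = max(len(samples) for samples, _ in voices)
    left = [0.0] * n
    right = [0.0] * n
    for samples, azimuth in voices:
        g_left, g_right = pan_gains(azimuth)
        for i, x in enumerate(samples):
            left[i] += g_left * x
            right[i] += g_right * x
    return left, right
```

A centred source (azimuth 0) receives equal gains of about 0.707 in each channel, so its perceived loudness matches a hard-panned source. An HRTF-based binaural model would replace the two scalar gains with per-source filter convolutions, which is the cost the paper's third model amortises through computation reuse.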

History

Citation

This article was originally published as: Que, Y. P., Boustead, P. & Safaei, F., Rendering models for Immersive Voice Communications within Distributed Virtual Environment, IEEE International Region 10 Conference (TENCON 2005), Melbourne, November 21-24 2005, 1-6. Copyright 2005 IEEE.

Parent title

IEEE Region 10 Annual International Conference, Proceedings/TENCON

Volume

2007

Language

English

RIS ID

12760
