Effective sonic interaction with sound synthesizers requires continuous control of a high-dimensional space. Moreover, the relationship between synthesis parameters and the timbre of the generated sound is typically complex or unknown to users. We previously introduced a generic, unsupervised mapping method based on machine listening and machine learning techniques, which addresses these challenges by providing a low-dimensional, perceptually related control space. The mapping was implemented in a fully automated system requiring little user input. With the improved method and optimized implementation presented in this paper, the time required for analysis and mapping computation is drastically reduced. We introduce the use of extreme learning machines for the regression between control and timbre spaces, improving both efficiency and accuracy. We also include an interactive approach to analyzing the synthesizer's sonic response, performed as users explore the parameters of the instrument. This work enables the computation of customized synthesis mappings through timbre spaces, reducing the time and complexity needed to obtain a usable system.
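As a minimal sketch of the regression technique named above, an extreme learning machine projects inputs through a fixed random hidden layer and solves for the output weights in closed form via least squares. The code below is an illustrative implementation, not the paper's own; the function names, hidden-layer size, and toy data are assumptions for demonstration.

```python
import numpy as np

def elm_fit(X, Y, n_hidden=100, seed=0):
    """Fit an extreme learning machine: random input weights,
    output weights solved by linear least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # fixed random input weights
    b = rng.normal(size=n_hidden)                # fixed random biases
    H = np.tanh(X @ W + b)                       # nonlinear hidden activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None) # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Map new inputs through the fixed hidden layer and learned output weights."""
    return np.tanh(X @ W + b) @ beta

# Toy usage: regress a smooth 1-D function (stand-in for a control-to-timbre map)
X = np.linspace(-3, 3, 200).reshape(-1, 1)
Y = np.sin(X)
W, b, beta = elm_fit(X, Y)
Y_hat = elm_predict(X, W, b, beta)
```

Because only the output weights are learned, training reduces to a single least-squares solve, which is what makes the approach attractive for fast, repeated mapping computation.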