Language modeling is one of the most effective approaches in information retrieval. Many retrieval systems based on language modeling have been developed and tested on English collections, so evaluating language modeling on collections in other languages is an interesting research issue. In this study, four language modeling methods proposed by Hiemstra are evaluated on a large Persian collection of news archives. Furthermore, we study two approaches that have been proposed for tuning the lambda parameter. Experimental results show that the performance of the language models on Persian text improves after lambda tuning; more specifically, the Witten-Bell method yields the best results.
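For reference, the lambda parameter above is the interpolation weight in the standard linear-interpolation (Jelinek-Mercer) form of Hiemstra's query-likelihood model; the formulation below is the widely used textbook version, given here as context rather than quoted from this paper:

P(q_1, \ldots, q_n \mid d) = \prod_{i=1}^{n} \bigl[ \lambda \, P(q_i \mid d) + (1 - \lambda) \, P(q_i \mid C) \bigr]

Here P(q_i \mid d) is the maximum-likelihood probability of query term q_i in document d, P(q_i \mid C) is its probability in the whole collection, and \lambda \in [0, 1] controls the mix between the two. Witten-Bell smoothing can be viewed as a per-document choice of this weight, typically \lambda_d = |d| / (|d| + |d_u|), where |d| is the document length and |d_u| the number of distinct terms in d.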