Evolution of Auto-encoders
With the abundance of data in today’s world, it becomes important not only to store this data in a compressed format but also to transmit and compute over it in as few dimensions as possible. This can yield significant cost savings across the board. While storage and retrieval concern data engineers and transmission concerns network experts, computation is the area everyone else is most interested in.
Since some modalities of data (such as images, audio, video, and text) are too large to process directly, it becomes essential to find a representation of this data that is syntactically crisper (lower-dimensional), semantically meaningful, and conveys the same information as the original. This is one of the prominent research domains AI researchers have been focusing on, since state-of-the-art deep learning models work with all of these modalities.
The discussion so far boils down to finding a latent space for the data at hand. “Latent” comes from the Latin word latere, which means “to lie hidden”. Also called the hidden space, it is a compressed, lower-dimensional space underlying the original (typically higher-dimensional) data, and its representation focuses on the important, semantically meaningful features of that data.
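To make the idea concrete, here is a minimal sketch of a vanilla auto-encoder in PyTorch. The specific numbers are illustrative assumptions, not prescriptions: a 784-dimensional input (a flattened 28x28 image), a 32-dimensional latent space, and a single 128-unit hidden layer on each side.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AutoEncoder(nn.Module):
    """Compresses the input into a low-dimensional latent vector,
    then reconstructs the input from that vector."""

    def __init__(self, input_dim=784, latent_dim=32):  # illustrative sizes
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, latent_dim),  # the latent (hidden) space
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128),
            nn.ReLU(),
            nn.Linear(128, input_dim),
            nn.Sigmoid(),  # assumes inputs scaled to [0, 1]
        )

    def forward(self, x):
        z = self.encoder(x)        # compressed representation
        return self.decoder(z)     # reconstruction of the original

model = AutoEncoder()
x = torch.rand(16, 784)            # dummy batch standing in for real data
x_hat = model(x)
loss = F.mse_loss(x_hat, x)        # reconstruction error drives training
```

Training minimizes the reconstruction loss, which forces the 32-dimensional bottleneck to retain exactly the kind of compact, semantically meaningful representation described above.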