Deriving mathematical models for increasingly intricate systems has become a challenging task. Such models serve multiple purposes: predicting future behavior, identifying patterns, and controlling robots and virtual agents. The key question is whether “smart” algorithms can learn, in real time, the laws governing the interactions among system variables. We propose developing algorithms that autonomously construct general-purpose models. These algorithms are designed to process heterogeneous data streams, including images, movements, sounds, electrode signals, social networks, and human language. Unlike other artificial-intelligence algorithms, the proposed algorithms capture granules of information to support approximate reasoning. The resulting models improve their performance autonomously, drawing on past experience and on interactions with the environment and other agents. We have laid the groundwork for instilling greater intelligence in software and robots. Our goal is to push the boundaries of machine intelligence toward more realistic scenarios that assume an evolving perception of the world and information granules in abstract spaces.