Statistics is the science of description and prediction. As systems grow more complex, correct description and prediction become harder. Machine learning is AI's latest gift to statistics. Although AI traces its roots as far back as the 17th century, ML is not as old as its ancestors. The nature of statistics comes from **predictions backed by factual matters.** On the ML side, however, things are not so straightforward. Although there is a strong correlation between **prediction** and **machine learning,** this correlation does not imply causation. The causation comes from their shared ancestor: **artificial intelligence.**

History is full of people who asked the question *why do we need it?* Not very surprisingly, they disappeared from the stage. The same situation is observed in ML: some of the “future experts” did not even bother to understand the capabilities of ML, while others saw the potential. The key point is that the correct question is *how can we use it?* Asking about and searching for potential uses of AI, and transposing humanoid factors onto AI, has produced countless applications of ML, from data science to public health. Therefore, the correct approach must always be **capacity solutions and potential enhancements,** not a search for the exact meaning of newly found concepts.

The shared value of both AI and ML, as described above, is prediction. Hence the overlapping concept is the more general one. Prediction by itself means nothing; when put into the context of data and/or a paradigm, it becomes the only solution to a complex problem. That is, descriptive analysis has the power of **stating the correct sides,** while the predictive side shows you the **possible outcomes, their roads, and alternative solutions.** Thus, predictive analysis with the correct tools and discourse reveals what is unseen. The point where description and prediction meet is called **decision:** after the statement of a problem and the listing of possible outcomes, an action is taken, namely a decision. There is therefore no need to stress that decision and prediction are *highly correlated.*
The exact sense of ML slowly reveals itself as we state terms such as description, prediction, and decision. Even a basic examination of ML theory shows that basic I/O systems are redefined by the ML approach: after an individual's first few decisions, the AI learns whether a pattern exists and acts accordingly. Decision theory rests on a probability and a utility; here, the probability is supplied by the system's predictions and the utility by past actions. Hence, whether it is a humanoid or an AI element, a decision-taker views **universal induction** as the sole solution to the matter, where the abovementioned variables are split into different tools and approaches, such as Bayes' theorem for prediction and the Turing approach for utility examination.
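The probability-plus-utility loop described above can be sketched in a few lines: Bayes' theorem supplies the probability (the prediction side), a utility table stands in for what was learned from past actions, and the decision is simply the action with the highest expected utility. This is a minimal illustrative sketch; the function names, the utility values, and the specific numbers are assumptions for the example, not taken from the text.

```python
def bayes_update(prior: float, likelihood: float, evidence: float) -> float:
    """Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)."""
    return likelihood * prior / evidence

def best_action(p_state: float, utilities: dict[str, tuple[float, float]]) -> str:
    """Pick the action with the highest expected utility.

    utilities maps each action to (utility if the state holds,
    utility if it does not) -- assumed to come from past actions.
    """
    return max(
        utilities,
        key=lambda a: p_state * utilities[a][0] + (1 - p_state) * utilities[a][1],
    )

# Prediction step: update the belief that a pattern exists, given evidence.
prior = 0.5                                                   # initial belief
p_state = bayes_update(prior, likelihood=0.9, evidence=0.6)   # -> 0.75

# Decision step: choose between acting on the pattern and waiting.
utilities = {"act": (10.0, -5.0), "wait": (0.0, 0.0)}
choice = best_action(p_state, utilities)                      # -> "act"
```

With a belief of 0.75, acting has expected utility 0.75 × 10 + 0.25 × (−5) = 6.25, which beats waiting (0), so the decision-taker acts; with a much weaker belief the same rule would choose to wait.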

Therefore, whether it is a simple I/O system or a complex model for any problem, both probability theory and decision-taking patterns are observed in inductive initiations. This resembles a human intelligence testing model, where the observer's ability to make correct decisions works through induction, and the tools used are probability and past knowledge of the matter. Thus, the 21st-century paradigm of AI suggests that an AI's ability to make decisions becomes consistent, and its probability of choosing the correct action increases, since it copies the human pattern of induction and action-taking. Sooner or later, what artificial intelligence turned into machine learning will turn into deep learning, since the dominance of **predictive and descriptive analysis** converges toward more complex concepts of intelligence and decision-taking.