📖 Artificial Intelligence Wiki
A repository of machine learning, data science, and artificial intelligence (AI) terms for individuals and businesses.
Whether you're looking to explore new concepts or brush up on your terminology, this wiki offers up-to-date information on key topics in data science, machine learning, and deep learning.
Not sure where to start? Check out our definition of MLOps to discover a modern approach to model training and deployment.
Artificial Intelligence is an umbrella term for a range of concepts and technologies that allow machines to exhibit human-like capabilities. Some common implementations include self-driving cars, human-impersonating chatbots, and facial recognition apps. A few recent breakthroughs have led to applications that don't just mimic human intelligence but go well beyond it, performing tasks that would otherwise be impossible for humans.
AI dates back to the 1950s and has been through several boom and bust cycles. Over the past few years, we've seen a tremendous resurgence of investment and excitement in AI, driven by the convergence of three key ingredients:
Abundant and cheap parallel computation with GPUs
Growing data sets and collection techniques
Advancements in underlying algorithms -- especially the advent of a neural network-based approach called Deep Learning
AI powers applications used by hundreds of millions of people every day. Businesses use AI for an enormous range of tasks, from powering recommender systems in e-commerce apps to diagnosing cancer.
AI is still in its infancy, and as an early-stage technology it is rapidly changing and challenging to implement. To gain more widespread adoption, AI needs to overcome a number of hurdles. These obstacles generally fall into two primary areas:
A lack of best practices (MLOps) across the entire model lifecycle
Infrastructure complexity inherent in developing and productionizing models
Today, data scientists spend only around 25% of their time developing models -- the other 75% is spent managing tooling and infrastructure.
"The biggest barrier to AI adoption is an infrastructure and tooling problem, not an algorithm problem." -- Dillon Erb, Paperspace CEO
These are the challenges that end-to-end AI platforms like Paperspace Gradient were built to solve.
Gradient enables teams to quickly develop, track, and deploy machine learning models from concept to production. The platform provides infrastructure automation and model lifecycle management with organization-wide visibility, reproducibility, and governance as first-class citizens.
For data scientists, Gradient provides the freedom to use familiar tools. Since Gradient provides DevOps support and resource orchestration, teams can focus on training algorithms and creating business value.
For organizations, Gradient reduces project costs by streamlining hardware usage and improving data science team productivity. The Kubernetes-native platform provides a unified ML hub that maximizes speed to deployment and time-to-value.
Gradient pioneered the concept of CI/CD for machine learning, which is helping transform data-driven organizations into model-driven organizations. This approach is also known as agile ML.
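To make the idea of CI/CD for machine learning concrete, here is a minimal sketch of the pattern: on every code or data change, a candidate model is retrained, evaluated against a quality gate, and promoted only if it beats the current production baseline. This is an illustration of the general concept, not Gradient's actual API; it assumes scikit-learn, synthetic data, and a hypothetical deployment hook.

```python
# Minimal sketch of CI/CD for machine learning (illustrative only; not Gradient's API).
# Every run retrains a candidate model, checks it against a quality gate, and
# "deploys" only if it outperforms the assumed production baseline.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

BASELINE_ACCURACY = 0.80  # assumed accuracy of the model currently in production


def run_pipeline() -> None:
    # 1. "CI" step: retrain on the latest data (synthetic data stands in here).
    X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
    candidate = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

    # 2. Quality gate: score the candidate on a held-out validation split.
    accuracy = accuracy_score(y_val, candidate.predict(X_val))

    # 3. "CD" step: promote only if the candidate beats the current baseline.
    if accuracy > BASELINE_ACCURACY:
        print(f"Deploying candidate (accuracy {accuracy:.3f} > baseline {BASELINE_ACCURACY}).")
        # deploy_model(candidate)  # hypothetical deployment hook
    else:
        print(f"Rejecting candidate (accuracy {accuracy:.3f} <= baseline {BASELINE_ACCURACY}).")


if __name__ == "__main__":
    run_pipeline()
```

In a real pipeline, this script would run automatically on every commit or dataset update, and the gate would compare the candidate's metrics against those of the model currently serving traffic.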