
Showing posts with the label software

Top 9 Data Modeling Tools & Software 2021

Data modeling is the process of crafting a visual representation of an entire information system, or portions of it, in order to convey the connections between data points and structures. The objective is to portray the types of data used and stored within the system, the ways the data can be organized and grouped, the relationships among these data types, and their attributes and formats. Data modeling uses abstraction to better understand and represent how data flows within an enterprise-level information system.

The types of data models include:
- Conceptual data models
- Logical data models
- Physical data models

Database and information system design begins with the creation of these data models.

What is a Data Modeling Tool? A data modeling tool enables quick and efficient database design while minimizing human error. Data modeling software helps craft a high-performance database, generate reports that are useful for stakeholders, and create data de...
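As a rough sketch of the logical-to-physical step (using SQLAlchemy purely for illustration; the entities here are hypothetical and not taken from any tool in the list), a one-to-many "author has posts" relationship can be declared once and then emitted as concrete DDL for a target database:

```python
from sqlalchemy import (
    Column, ForeignKey, Integer, MetaData, String, Table, create_engine,
)

metadata = MetaData()

# Logical model: Author and Post entities in a one-to-many relationship.
authors = Table(
    "authors", metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String(100), nullable=False),
)
posts = Table(
    "posts", metadata,
    Column("id", Integer, primary_key=True),
    Column("author_id", Integer, ForeignKey("authors.id"), nullable=False),
    Column("title", String(200), nullable=False),
)

# Physical model: generate DDL for a concrete database (in-memory SQLite here);
# pointing the engine at PostgreSQL etc. would emit that dialect's DDL instead.
engine = create_engine("sqlite://")
metadata.create_all(engine)
```

The same logical definition can target different physical backends, which is essentially the separation the three model levels describe.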

ThoughtWorks Decoder puts tech into a business context

The tech landscape changes pretty fast. There are always new terms, techniques and tools emerging. But don't let tech be an enigma: ThoughtWorks Decoder is here to help. Simply search for the term you're interested in, and we'll give you the lowdown on what it is, what it can do for your enterprise and what the potential drawbacks are. ThoughtWorks Decoder >>>

Model governance and model operations: building and deploying robust, production-ready machine learning models

O'Reilly's surveys over the past couple of years have shown growing interest in machine learning (ML) among organizations from diverse industries. A few factors are contributing to this strong interest in implementing ML in products and services. First, the machine learning community has conducted groundbreaking research in many areas of interest to companies, and much of this research has been conducted out in the open via preprints and conference presentations. We are also beginning to see researchers share sample code written in popular open source libraries, and some even share pre-trained models. Organizations now also have more use cases and case studies from which to draw inspiration—no matter what industry or domain you are interested in, chances are there are many interesting ML applications you can learn from. Finally, modeling tools are improving, and automation is beginning to allow new users to tackle problems that used to be the province of experts. With the s...

How Facebook Scales Machine Learning

A look at the software and hardware considerations Facebook made to successfully scale its AI/ML infrastructure, per an excellent talk given by Yangqing Jia, Facebook's Director of AI Infrastructure, at the Scaled Machine Learning Conference. Watch Video >>> Full Article >>>

Announcing Great Expectations v0.4 (We have SQL…!)

Based on feedback from the past month, we've revised, improved, and extended Great Expectations. 284 commits, 103 files changed, and 7 new contributors later, we've just released v0.4! Here's what's new.

#1 Native SQL

By far the most common request we received was the ability to run expectations natively in SQL. This was always on the roadmap; the community response made it our top priority. We've introduced a new class called SQLAlchemyDataset. It contains all* the same expectations as the original PandasDataset class, but instead of executing them against a DataFrame in local memory, it executes them against a database table using the SQLAlchemy core API. This gets us several wins, all at once:
- Since SQLAlchemy binds to most popular databases, we get immediate integration with all of those systems. We've already heard from teams developing against PostgreSQL, Presto/Hive, and SQL Server. We expect to see lots more adoption on this front soon.
- Since ...
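The push-down idea can be sketched in a few lines (a toy illustration of the mechanism, not Great Expectations' actual implementation; the table and the expectation function here are hypothetical): rather than pulling rows into a DataFrame, the check compiles to a COUNT query that the database itself executes.

```python
from sqlalchemy import (
    Column, Integer, MetaData, String, Table, create_engine, func, select,
)

engine = create_engine("sqlite://")
metadata = MetaData()
users = Table(
    "users", metadata,
    Column("id", Integer, primary_key=True),
    Column("email", String, nullable=True),
)
metadata.create_all(engine)

with engine.begin() as conn:
    conn.execute(users.insert(), [
        {"id": 1, "email": "a@example.com"},
        {"id": 2, "email": None},
    ])

def expect_column_values_to_not_be_null(conn, table, column):
    """Toy expectation: count NULLs in-database instead of in local memory."""
    nulls = conn.execute(
        select(func.count())
        .select_from(table)
        .where(table.c[column].is_(None))
    ).scalar_one()
    return {"success": nulls == 0, "unexpected_count": nulls}

with engine.connect() as conn:
    result = expect_column_values_to_not_be_null(conn, users, "email")
```

Because SQLAlchemy core generates dialect-appropriate SQL, the same expectation runs unchanged against any database SQLAlchemy can bind to.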

Software 2.0

(by Andrej Karpathy, Director of AI at Tesla) Neural networks are not just another classifier, they represent the beginning of a fundamental shift in how we write software. They are Software 2.0. The “classical stack” of Software 1.0 is what we’re all familiar with — it is written in languages such as Python, C++, etc. It consists of explicit instructions to the computer written by a programmer. By writing each line of code, the programmer is identifying a specific point in program space with some desirable behavior. In contrast, Software 2.0 is written in neural network weights. No human is involved in writing this code because there are a lot of weights (typical networks might have millions), and coding directly in weights is kind of hard (I tried). Instead, we specify some constraints on the behavior of a desirable program (e.g., a dataset of input-output pairs of examples) and use the computational resources at our disposal to search the program space for a pr...
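The "search the program space" idea can be made concrete with a toy example (a hypothetical one-weight "network" in plain Python, not Karpathy's code): the constraints are input-output pairs generated by y = 3x, and gradient descent over the weight recovers the program "multiply by 3" without anyone writing it explicitly.

```python
# Toy "Software 2.0": instead of writing the program (multiply by 3),
# we specify input-output examples and search weight space for a
# program that satisfies them.
pairs = [(1.0, 3.0), (2.0, 6.0), (4.0, 12.0)]  # constraints: y = f(x)

w = 0.0   # the entire "program" is this single weight
lr = 0.02  # learning rate

for _ in range(500):
    # Gradient of mean squared error: d/dw mean((w*x - y)^2)
    grad = sum(2 * (w * x - y) * x for x, y in pairs) / len(pairs)
    w -= lr * grad

# w has converged to ~3.0: the search found the program "multiply by 3".
```

Real networks do the same thing in a space of millions of weights; the dataset plays the role the source code played in Software 1.0.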