Analytics Blog

Category: Engineering

All About Deep Tech: Creating Scoring Engines with PFA

In recent blogs we have talked extensively about model operationalization and the support Chorus provides for the PFA (Portable Format for Analytics) standard. PFA offers a standardized way of representing analytical models, giving much-needed model portability, i.e., the ability to train a model on one data platform, serialize the model as PFA, and then… Read more »
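To make the portability idea concrete, here is a minimal sketch of scoring a PFA document in Python with the open-source Titus engine from the Hadrian project (not a Chorus component, just one available PFA implementation). The required top-level fields (input, output, action) come from the PFA specification; the toy document below simply adds 10 to each input rather than encoding a real trained model.

```python
# A minimal PFA scoring sketch, assuming the open-source Titus PFA engine
# is installed. A real PFA document would encode a trained model; this toy
# document just adds 10 to each input value.
from titus.genpy import PFAEngine

# The three required top-level fields of a PFA document:
# the input type, the output type, and the action applied to each datum.
pfa_document = """
input: double
output: double
action:
  - {+: [input, 10]}
"""

# fromYaml returns a list of engine instances; we take the single default one.
engine, = PFAEngine.fromYaml(pfa_document)

# Score a few values. The same document could be executed unchanged by any
# PFA-compliant engine on any platform, which is the portability PFA provides.
for x in [1.0, 2.5, 7.0]:
    print(engine.action(x))
```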


All About Deep Tech: Model Operationalization

Model operationalization is a core component of effective data science and a key focus at Alpine Data. In previous blogs, I’ve written frequently about model ops, especially the support Chorus provides for exporting models in the PFA and PMML formats. But what about scoring on data platforms that don’t yet provide PFA or PMML… Read more »
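As a rough illustration of the export step mentioned above, the sketch below trains a scikit-learn model and serializes it as PMML using the open-source sklearn2pmml package (which delegates to a Java-based converter). This is not how Chorus performs its export; it only shows the general train-then-serialize pattern that makes a model consumable by any PMML-aware scoring engine.

```python
# An open-source illustration of exporting a trained model as PMML,
# assuming scikit-learn and sklearn2pmml (plus a Java runtime) are installed.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn2pmml import sklearn2pmml
from sklearn2pmml.pipeline import PMMLPipeline

X, y = load_iris(return_X_y=True)

# Wrap the estimator in a PMMLPipeline so it can be converted after fitting.
pipeline = PMMLPipeline([("classifier", LogisticRegression(max_iter=200))])
pipeline.fit(X, y)

# Serialize the fitted pipeline as a PMML document that a PMML-aware scoring
# engine can evaluate independently of the training environment.
sklearn2pmml(pipeline, "iris_logreg.pmml")
```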


All About Deep Tech: Execution Frameworks

The “All About Deep Tech” blog series is about just what the title suggests: in-depth looks at the cool technology that makes Chorus run smoothly and how we leverage the latest and greatest in the big data industry to keep our customers ahead of the curve. If you missed our last post on AdaptiveML, be… Read more »


All About Deep Tech: Alpine AdaptiveML

As VP of Engineering at Alpine, I am chartered to build a product that helps enterprise customers leverage the latest and greatest in data science and machine learning technology to create tangible business outcomes. In some cases, this takes the form of integrating various open source algorithms into the product for emerging areas such as deep… Read more »


Using Hive to Perform Advanced Analytics in Hadoop

Hadoop data warehouses have continued to gain popularity, with solutions such as Hive, Impala, and HAWQ now frequently deployed at customer sites. Access to these warehouses is typically tightly controlled using Ranger or Sentry, ensuring comprehensive data security. Due to the ease with which data can be governed in Hive, an increasing number of… Read more »
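For a sense of what querying such a governed warehouse looks like from an analytics client, here is a small sketch using the open-source PyHive package. The host, database, table, and Kerberos settings are placeholders, and the actual access rules would be enforced on the cluster side by Ranger or Sentry policies rather than by anything in this code.

```python
# A minimal sketch of running an analytical query against a secured Hive
# warehouse, assuming the PyHive client and its SASL dependencies are
# installed. Hostnames, the database, and the table are placeholders.
from pyhive import hive

conn = hive.Connection(
    host="hive-gateway.example.com",   # placeholder HiveServer2 host
    port=10000,
    database="analytics",              # placeholder database
    auth="KERBEROS",                   # assumes a Kerberized cluster
    kerberos_service_name="hive",
)

cursor = conn.cursor()

# An example aggregation over a hypothetical governed table.
cursor.execute("""
    SELECT region, AVG(order_total) AS avg_order
    FROM sales
    GROUP BY region
""")

for region, avg_order in cursor.fetchall():
    print(region, avg_order)

cursor.close()
conn.close()
```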