Featurewiz is a new Python library for selecting the best features in your data set, fast!
(featurewiz logo created using Wix)
<p>Two methods are used in this version of featurewiz:<br>

1. SULOV -> SULOV stands for "Searching for Uncorrelated List Of Variables". The SULOV method is explained in the chart below.
Here is a simple way of explaining how it works:
<ol>
<li>Find all the pairs of highly correlated variables exceeding a correlation threshold (say absolute(0.7)).
<li>Then find their Mutual Information Score (MIS) to the target variable. MIS is a non-parametric scoring method, so it works for all kinds of variables and targets.
<li>Now take each pair of correlated variables and eliminate the one with the lower MIS score.
<li>What's left are the variables with the highest Information Scores and the least correlation with each other.
</ol>


![sulov](SULOV.jpg)
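Here is a minimal sketch of the SULOV idea in plain pandas/scikit-learn. This is an illustration only, not featurewiz's actual implementation: the function name is made up, and `mutual_info_regression` assumes numeric features and a numeric target.

```
# Sketch of SULOV: drop the lower-scoring member of each highly
# correlated feature pair (illustrative only, not featurewiz's code).
import pandas as pd
from sklearn.feature_selection import mutual_info_regression

def sulov_sketch(df, target, corr_limit=0.7):
    X = df.drop(columns=[target])
    # Mutual Information Score (MIS) of every feature to the target
    mis = pd.Series(mutual_info_regression(X, df[target]), index=X.columns)
    corr = X.corr().abs()
    dropped = set()
    cols = list(X.columns)
    for i, a in enumerate(cols):
        for b in cols[i + 1:]:
            if corr.loc[a, b] > corr_limit:
                # of each correlated pair, eliminate the lower-MIS variable
                dropped.add(a if mis[a] < mis[b] else b)
    return [c for c in cols if c not in dropped]
```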


2. Recursive XGBoost: Once SULOV has selected the variables with high mutual information scores and the least correlation among them, featurewiz uses XGBoost to repeatedly find the best features among the remaining variables, applying XGBoost feature importances recursively to smaller and smaller subsets of your data. The Recursive XGBoost method is explained in the chart below.

![xgboost](xgboost.jpg)

Here is how it works:
<ol>
<li>Select all the variables in the data set and split the full data into train and valid sets.
<li>Find the top X features (say 10) on the train set, using the valid set for early stopping (to prevent over-fitting).
<li>Then take the next subset of variables and find the top X among them.
<li>Do this 5 times. Combine all selected features and de-duplicate them.
</ol>
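Here is a minimal sketch of that loop, assuming a regression target and a recent xgboost version (where `early_stopping_rounds` is a constructor argument); the helper name is hypothetical and featurewiz's real implementation differs:

```
# Sketch of Recursive XGBoost: rank successive chunks of columns with
# XGBoost and keep the union of the top features from each round.
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

def recursive_xgboost_sketch(X, y, top_x=10, rounds=5):
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=0)
    selected = []
    for chunk in np.array_split(np.array(X.columns), rounds):
        cols = list(chunk)
        model = xgb.XGBRegressor(n_estimators=100, early_stopping_rounds=10)
        model.fit(X_tr[cols], y_tr, eval_set=[(X_va[cols], y_va)], verbose=False)
        ranked = sorted(zip(model.feature_importances_, cols), reverse=True)
        selected += [c for _, c in ranked[:top_x]]
    return list(dict.fromkeys(selected))  # combine and de-duplicate
```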

3. <b>Performing Feature Engineering</b>: One of the gaps in open source AutoML tools, and especially Auto_ViML, has been the lack of feature engineering capabilities that high-powered competitions like Kaggle require. The ability to create "interaction" variables, add "group-by" features or "target-encode" categorical variables was difficult, and sifting through those hundreds of new features was painstaking work left only to "experts". Now there is some good news.
<br>
featurewiz (https://lnkd.in/eGep5uG) now enables you to add hundreds of such features with one line of code. Set the "feature_engg" flag to "interactions", "groupby" or "target" and featurewiz will select the best encoders for each of those options and create hundreds (perhaps thousands) of features in one go. Not only that: it will then use the SULOV method and Recursive XGBoost to sift through those variables and find only the least correlated and most important features among them. All in one step!<br>
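For example, here is a hedged sketch of turning the flag on (`train` and `'target'` are hypothetical names; the call follows the signature shown in the usage section below):

```
from featurewiz import featurewiz

outputs = featurewiz(train, 'target', corr_limit=0.7, verbose=2,
                     feature_engg='interactions')
```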


4. <b>Classification of variables by type</b>: featurewiz automatically detects the different types of variables in your data set and converts them to numeric, except date-time, NLP and large-text variables. Those variables must be properly encoded and transformed (or embedded) into numeric form by you if you want them included in featurewiz selection.<br>

5. <b>Building the simplest and most "interpretable" model</b>: featurewiz represents the "next best" step you must perform after doing feature engineering, since you might have added some highly correlated or even useless features when using automated feature engineering. featurewiz ensures you have the fewest number of features needed to build a high-performing or equivalent model.

<b>A WORD OF CAUTION:</b> Just because you can, doesn't mean you should. Make sure you understand the feature-engineered variables before you attempt to build your model any further. featurewiz displays the SULOV chart, which can show you the hundreds of newly created variables added to your dataset.
<br>
But you still have two problems:
1. How to interpret those newly created features?
2. Does the model now overfit on this many features?
<br>
Both are very important questions and you must be very careful using this feature_engg option in featurewiz. Otherwise, you can create a "garbage in, garbage out" problem. Caveat Emptor!
<br>
<p>To upgrade to the best, most stable and full-featured version, always do the following: <br>
<code>pip install featurewiz --upgrade --ignore-installed</code><br>

To learn more about how featurewiz works under the hood, watch this [video](http
In most cases, featurewiz builds models with 20%-99% fewer features than your original data set with nearly the same or slightly lower performance (this is based on my trials. Your experience may vary).<br>
<p>
featurewiz is every Data Scientist's feature wizard that will:<ol>
<li><b>Automatically pre-process data</b>: you can send in your entire dataframe "as is" and featurewiz will classify and label-encode categorical variables to help XGBoost process them. It classifies variables as numeric, categorical, NLP or date-time automatically so it can use them correctly in modeling.<br><br>
<li><b>Perform feature engineering automatically</b>: Creating "interaction" variables, adding "group-by" features or "target-encoding" categorical variables is difficult, and sifting through those hundreds of new features is painstaking work left only to "experts". Now, with featurewiz, you can create hundreds or even thousands of new features with the click of a mouse. This is very helpful when you have a small number of features to start with. However, be careful with this option: you can very easily create a monster with it.
<li><b>Perform feature reduction automatically</b>. When you have small data sets and you know your domain well, it is easy to do EDA and identify which variables are important. But when you have a very large data set with hundreds if not thousands of variables, selecting the best features can mean the difference between a bloated, highly complex model and a simple model with the fewest, most information-rich features. featurewiz uses XGBoost repeatedly to perform feature selection. You must try it on your large data sets and compare!<br>
<li><b>Explain the SULOV method graphically</b> using the networkx library, so you can automatically see which variables are highly correlated with which, and which of those have high or low mutual information scores. Just set verbose = 2 to see the graph; a rough sketch of the idea follows this list. <br>
</ol>
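As an illustration of such a chart, here is a minimal networkx sketch under stated assumptions: the function name is made up, `corr` is a feature correlation DataFrame, and `mis` is a Series of mutual information scores (e.g. from the SULOV sketch above). featurewiz's own chart code differs:

```
# Sketch: nodes are features sized by mutual information; edges connect
# pairs whose absolute correlation exceeds the threshold.
import networkx as nx
import matplotlib.pyplot as plt

def plot_sulov_graph(corr, mis, corr_limit=0.7):
    G = nx.Graph()
    G.add_nodes_from(corr.columns)
    for i, a in enumerate(corr.columns):
        for b in corr.columns[i + 1:]:
            if abs(corr.loc[a, b]) > corr_limit:
                G.add_edge(a, b)
    sizes = [3000 * mis[n] for n in G.nodes]   # bigger node = higher MIS
    nx.draw_networkx(G, node_size=sizes, with_labels=True)
    plt.show()
```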

<b>*** Notes of Gratitude ***</b>:<br>
<ol>
<li><b>featurewiz is built using xgboost, numpy, pandas and matplotlib</b>. It should run on most Python 3 Anaconda installations. You won't have to import any special libraries other than "XGBoost" and "networkx" library. </li>
<li><b>We use "networkx" library for charts and interpretability</b>. <br>But if you don't have these libraries, featurewiz will install those for you automatically.</li>
<li>Alex Lekov (https://github.com/Alex-Lekov/AutoML_Alex/tree/master/automl_alex) for his DataBunch module which is used by the tool.</li>
<li>Category Encoders library in Python : This is an amazing library. Make sure you read all about the encoders that featurewiz uses here: https://contrib.scikit-learn.org/category_encoders/index.html </li>
</ol>

## Install

```
pip install featurewiz
```

Then import it in your Python session or script:

```
from featurewiz import featurewiz
```
Load a data set (any CSV or text file) into a Pandas dataframe and give featurewiz the name of the target variable(s). If you have more than one target, it handles multi-label targets too; just give it a list of target variables in that case. If you don't have a dataframe, you can simply enter the name and path of the file for featurewiz to load:

```
featurewiz(dataname, target, corr_limit=0.7, verbose=0, sep=",", header=0,
           test_data='', feature_engg='', category_encoders='')
```
Output: a Tuple which contains the list of features selected, the dataframe modified with new features, and the test data modified.
This list of selected features is now ready for further modeling.
featurewiz works on any Multi-Class, Multi-Label data set. So you can have as many target labels as you want.
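For a multi-label problem, pass a list of target column names instead (a hedged sketch with hypothetical names):

```
outputs = featurewiz(train, ['label1', 'label2'], corr_limit=0.7, verbose=0)
```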
You don't have to tell featurewiz whether it is a Regression or Classification problem.

`verbose`: This option has three levels:
- `0` limited output. Great for running this silently and getting fast results.
- `1` more verbiage. Great for understanding the results and for tuning the input flags.
- `2` SULOV charts and output. Great for finding out what happens under the hood for SULOV method.
`test_data`: If you want to transform test data in the same way you are transforming dataname, you can pass it here.
test_data can be the name of a datapath+filename or a dataframe. featurewiz will detect whether
your input is a filename or a dataframe and load it automatically. Default is an empty string.
`feature_engg`: You can let featurewiz select its best encoders for your data set by setting this flag
to add feature engineering. There are three choices; you can choose one, two or all three.
'interactions': This will add interaction features to your data, such as x1*x2, x2*x3, x1**2, x2**2, etc.
'groupby': This will generate Group-By features for your numeric variables by grouping on all categorical variables.
'target': This will encode and transform all your categorical features using certain target encoders.
Default is an empty string (which means no additional features).
`category_encoders`: Instead of the above method, you can choose your own category encoders from the list below.
We recommend you use no more than two of these; featurewiz will automatically select only two from your list.
Default is an empty string (which means no encoding of your categorical features):
['HashingEncoder', 'SumEncoder', 'PolynomialEncoder', 'BackwardDifferenceEncoder',
'OneHotEncoder', 'HelmertEncoder', 'OrdinalEncoder', 'FrequencyEncoder', 'BaseNEncoder',
'TargetEncoder', 'CatBoostEncoder', 'WOEEncoder', 'JamesSteinEncoder']
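For example, a hedged sketch of choosing your own encoders (the dataframe and target names are hypothetical, and passing a Python list of encoder names is assumed from the description above):

```
outputs = featurewiz(train, 'target', corr_limit=0.7, verbose=1,
                     category_encoders=['TargetEncoder', 'CatBoostEncoder'])
```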
**Return values**

If you don't want any feature_engg, then featurewiz will return just one thing:
- `features`: the fewest number of features your model needs to perform well

Otherwise, featurewiz can output either one dataframe or two, depending on what you send in as input:
1. `trainm`: the train dataframe, modified with the engineered and selected features from dataname.
2. `testm`: the test dataframe, modified with the engineered and selected features from test_data.
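A hedged sketch of unpacking these return values when both feature engineering and test data are supplied (variable names are illustrative; the three-element tuple follows the output description above):

```
features, trainm, testm = featurewiz(train, 'target', feature_engg='groupby',
                                     test_data='test.csv')
print(features)              # the selected feature names
X_train = trainm[features]   # engineered + selected features only
```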
## Maintainers