Information-theoretic calculations
Given a list of bin edges, make a list of strings describing those bin ranges
| Parameters: | bins : list_like |
|---|---|
| Returns: | bin_ranges : list |

>>> bin_range_strings((0, 0.5, 1))
['0-0.5', '0.5-1']
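A minimal sketch of how such a function could be implemented — only the name and the doctest above come from the docs; the body is an assumption:

```python
def bin_range_strings(bins):
    # Pair each bin edge with the next one and join them as "start-end"
    return ['{}-{}'.format(start, end) for start, end in zip(bins, bins[1:])]

bin_range_strings((0, 0.5, 1))  # ['0-0.5', '0.5-1']
```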
Make a histogram of each column with the provided bins
| Parameters: | data : pandas.DataFrame |
|---|---|
| | bins : iterable |
| Returns: | binned : pandas.DataFrame |
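A sketch of what this could look like, assuming each column is histogrammed over shared bin edges and labeled with the bin-range strings; normalizing each column to a probability distribution is an assumption, consistent with the JSD/KLD functions on this page expecting probability distributions:

```python
import numpy as np
import pandas as pd

def binify(data, bins):
    # Label each bin by its edge range, e.g. '0-0.5'
    ranges = ['{}-{}'.format(a, b) for a, b in zip(bins, bins[1:])]
    # Histogram every column over the shared edges, then normalize each
    # column so it sums to 1 (the normalization step is an assumption)
    counts = data.apply(lambda col: pd.Series(
        np.histogram(col.dropna(), bins=bins)[0], index=ranges))
    return counts / counts.sum()
```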
Jensen-Shannon divergence of features across phenotypes
| Parameters: | data : pandas.DataFrame |
|---|---|
| | groupby : mappable |
| | n_iter : int |
| | n_bins : int |
| Returns: | jsd_df : pandas.DataFrame |
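The overall flow might look like the sketch below: bin each phenotype group's samples into shared bins, then compute a per-feature JSD for every pair of phenotypes. The helper names, the bin construction, and the omission of the `n_iter` resampling loop are all assumptions:

```python
import itertools

import numpy as np
import pandas as pd

def _binify(data, bins):
    # Histogram each column and normalize to a probability distribution
    counts = data.apply(lambda col: pd.Series(
        np.histogram(col.dropna(), bins=bins)[0]))
    return counts / counts.sum()

def _kld(p, q):
    # Per-column Kullback-Leibler divergence in bits; 0 * log(0) counts as 0
    with np.errstate(divide='ignore', invalid='ignore'):
        return (p * np.log2(p / q)).fillna(0).sum()

def _jsd(p, q):
    # Jensen-Shannon divergence: mean KLD of p and q against their midpoint
    m = (p + q) / 2.0
    return 0.5 * _kld(p, m) + 0.5 * _kld(q, m)

def cross_phenotype_jsd(data, groupby, n_bins=10):
    # NOTE: the documented function also takes n_iter; the bootstrap
    # resampling that parameter implies is omitted from this sketch.
    bins = np.linspace(data.min().min(), data.max().max(), n_bins + 1)
    binned = {name: _binify(df, bins) for name, df in data.groupby(groupby)}
    pairs = list(itertools.combinations(sorted(binned), 2))
    rows = [_jsd(binned[a], binned[b]) for a, b in pairs]
    # One row per phenotype pair, one column per feature
    return pd.DataFrame(rows, index=pd.MultiIndex.from_tuples(pairs))
```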
Find the entropy of each column of a dataframe
| Parameters: | binned : pandas.DataFrame |
|---|---|
| | base : numeric |
| Returns: | entropy : pandas.Series |
| Raises: | ValueError |
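A sketch of the per-column Shannon entropy; raising the `ValueError` when a column is not a probability distribution is an assumption, consistent with the Notes at the bottom of this page:

```python
import numpy as np
import pandas as pd

def entropy(binned, base=2):
    # Guard: every column must already be a probability distribution
    # (the exact ValueError condition is an assumption)
    if not np.allclose(binned.sum(), 1.0):
        raise ValueError('Each column of `binned` must sum to 1')
    with np.errstate(divide='ignore', invalid='ignore'):
        plogp = binned * (np.log(binned) / np.log(base))
    # 0 * log(0) produces NaN; treat it as 0 by convention
    return -plogp.fillna(0).sum()
```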
Find the per-column JSD between dataframes p and q
Jensen-Shannon divergence of two probability-distribution pandas DataFrames, p and q. These distributions are usually created by running binify() on the dataframe.
| Parameters: | p : pandas.DataFrame |
|---|---|
| | q : pandas.DataFrame |
| Returns: | jsd : pandas.Series |
| Raises: | ValueError |
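Assuming the standard definition JSD(P, Q) = ½ KLD(P‖M) + ½ KLD(Q‖M), where M is the pointwise mean of P and Q, a column-wise sketch could read:

```python
import numpy as np
import pandas as pd

def kld(p, q):
    # Per-column Kullback-Leibler divergence in bits; 0 * log(0) counts as 0
    with np.errstate(divide='ignore', invalid='ignore'):
        return (p * np.log2(p / q)).fillna(0).sum()

def jsd(p, q):
    # Jensen-Shannon divergence: mean of the two KLDs against the midpoint M
    m = (p + q) / 2.0
    return 0.5 * kld(p, m) + 0.5 * kld(q, m)
```

With base-2 logarithms the result per feature is bounded: 0 for identical distributions, 1 for distributions with disjoint support.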
Transform a tall JSD dataframe to a square matrix of mean JSDs
| Parameters: | jsd_df : pandas.DataFrame |
|---|---|
| Returns: | jsd_2d : pandas.DataFrame |
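One way this transformation might be implemented, assuming the tall frame carries the two phenotype labels as columns `phenotype1`/`phenotype2` alongside a `jsd` value column (the function and column names are assumptions):

```python
import pandas as pd

def jsd_tall_to_square(jsd_df):
    # Average the JSD per (phenotype1, phenotype2) pair, then pivot the
    # second label level out into columns to get a matrix
    mean_jsd = (jsd_df.groupby(['phenotype1', 'phenotype2'])['jsd']
                .mean().unstack())
    # Mirror across the diagonal so the matrix is symmetric
    return mean_jsd.combine_first(mean_jsd.T)
```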
Kullback-Leibler divergence of two probability-distribution pandas DataFrames, p and q
| Parameters: | p : pandas.DataFrame |
|---|---|
| | q : pandas.DataFrame |
| Returns: | kld : pandas.Series |
| Raises: | ValueError |
Notes
The inputs to this function must be probability distributions, not raw values. Otherwise, the output makes no sense.
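Concretely, the per-column computation could look like this sketch (base-2 logarithms are an assumption):

```python
import numpy as np
import pandas as pd

def kld(p, q):
    # Element-wise p * log2(p/q), summed per column; 0 * log(0) counts as 0.
    # p and q must already be probability distributions, per the Notes above.
    with np.errstate(divide='ignore', invalid='ignore'):
        return (p * np.log2(p / q)).fillna(0).sum()
```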