scipy.spatial.distance.jensenshannon
- scipy.spatial.distance.jensenshannon(p, q, base=None, *, axis=0, keepdims=False)
 Compute the Jensen-Shannon distance (metric) between two probability arrays. This is the square root of the Jensen-Shannon divergence.
The Jensen-Shannon distance between two probability vectors p and q is defined as,
\[\sqrt{\frac{D(p \parallel m) + D(q \parallel m)}{2}}\]
where \(m\) is the pointwise mean of \(p\) and \(q\) and \(D\) is the Kullback-Leibler divergence.
This routine will normalize p and q if they don’t sum to 1.0.
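The relationship to scipy.stats.entropy can be spelled out with a short sketch. The helper js_distance below is hypothetical and shown only to illustrate the formula above (scipy.stats.entropy with two arguments computes the Kullback-Leibler divergence); it is not the library's implementation.
>>> import numpy as np
>>> from scipy.stats import entropy
>>> def js_distance(p, q, base=None):
...     p = np.asarray(p, dtype=float) / np.sum(p)  # normalize, as the routine does
...     q = np.asarray(q, dtype=float) / np.sum(q)
...     m = (p + q) / 2.0                           # pointwise mean
...     kl_pm = entropy(p, m, base=base)            # D(p || m)
...     kl_qm = entropy(q, m, base=base)            # D(q || m)
...     return np.sqrt((kl_pm + kl_qm) / 2.0)
>>> float(js_distance([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], base=2.0))
1.0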
- Parameters:
- p : (N,) array_like
 left probability vector
- q : (N,) array_like
 right probability vector
- base : double, optional
 The base of the logarithm used to compute the output. If not given, the routine uses the default base of scipy.stats.entropy.
- axis : int, optional
 Axis along which the Jensen-Shannon distances are computed. The default is 0.
New in version 1.7.0.
- keepdims : bool, optional
 If this is set to True, the reduced axes are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array. Default is False.
New in version 1.7.0.
- Returns:
- js : double or ndarray
 The Jensen-Shannon distances between p and q along the axis.
Notes
New in version 1.2.0.
Examples
>>> from scipy.spatial import distance
>>> import numpy as np
>>> distance.jensenshannon([1.0, 0.0, 0.0], [0.0, 1.0, 0.0], 2.0)
1.0
>>> distance.jensenshannon([1.0, 0.0], [0.5, 0.5])
0.46450140402245893
>>> distance.jensenshannon([1.0, 0.0, 0.0], [1.0, 0.0, 0.0])
0.0
>>> a = np.array([[1, 2, 3, 4],
...               [5, 6, 7, 8],
...               [9, 10, 11, 12]])
>>> b = np.array([[13, 14, 15, 16],
...               [17, 18, 19, 20],
...               [21, 22, 23, 24]])
>>> distance.jensenshannon(a, b, axis=0)
array([0.1954288, 0.1447697, 0.1138377, 0.0927636])
>>> distance.jensenshannon(a, b, axis=1)
array([0.1402339, 0.0399106, 0.0201815])
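A few extra calls, not part of the example set above, illustrate the normalization behavior and the base and keepdims options; the expected outputs follow from the documented semantics (un-normalized inputs are rescaled before the distance is computed, and keepdims only changes the shape of the result).
>>> float(distance.jensenshannon([2.0, 0.0], [1.0, 1.0]))  # inputs are normalized, so this matches [1.0, 0.0] vs [0.5, 0.5]
0.46450140402245893
>>> print(f"{distance.jensenshannon([1.0, 0.0], [0.5, 0.5], base=2.0):.4f}")
0.5579
>>> distance.jensenshannon(a, b, axis=1, keepdims=True).shape
(3, 1)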