
dyce package reference

dyce provides several core primitives:

  • H – histograms (outcomes or individual dice)
  • P – collections of histograms (pools)
  • R – scalars, histograms, pools, operators, etc. for assembling roller trees (see dyce.r for details)

H (Mapping, Generic)

An immutable mapping for use as a histogram which supports arithmetic operations. This is useful for modeling discrete outcomes, like individual dice. H objects encode finite discrete probability distributions as integer counts without any denominator.

Info

The lack of an explicit denominator is intentional and has two benefits. First, a denominator is redundant. Without it, one never has to worry about probabilities summing to one (e.g., via miscalculation, floating point error, etc.). Second (and perhaps more importantly), sometimes one wants to have an insight into non-reduced counts, not just probabilities. If needed, probabilities can always be derived, as shown below.

The initializer takes a single parameter, items. In its most explicit form, items maps outcome values to counts.

Modeling a single six-sided die (1d6) can be expressed as:

``` python
>>> from dyce import H
>>> d6 = H({1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1})

```

An iterable of pairs can also be used (similar to dict).

``` python
>>> d6 == H(((1, 1), (2, 1), (3, 1), (4, 1), (5, 1), (6, 1)))
True

```

Two shorthands are provided. If items is an iterable of numbers, counts of 1 are assumed.

``` python
>>> d6 == H((1, 2, 3, 4, 5, 6))
True

```

Repeated items are accumulated, as one would expect.

``` python
>>> H((2, 3, 3, 4, 4, 5))
H({2: 1, 3: 2, 4: 2, 5: 1})

```

If items is an integer, it is shorthand for creating a sequential range \([1 .. items]\) (or \([items .. -1]\) if items is negative).

``` python
>>> d6 == H(6)
True

```
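Per the rule above, the negative form counts down from items to -1, as a quick check confirms:

``` python
>>> H(-6) == H({-6: 1, -5: 1, -4: 1, -3: 1, -2: 1, -1: 1})
True

```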

Histograms are maps, so we can test equivalence against other maps.

``` python
>>> H(6) == {1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1}
True

```

Simple indexes can be used to look up an outcome’s count.

``` python
>>> H((2, 3, 3, 4, 4, 5))[3]
2

```
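Since H implements the Mapping interface, indexing a missing outcome raises KeyError (as shown further below), while the inherited get method can supply a default:

``` python
>>> H((2, 3, 3, 4, 4, 5)).get(6, 0)
0

```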

Most arithmetic operators are supported and do what one would expect. If the operand is a number, the operator applies to the outcomes.

``` python
>>> d6 + 4
H({5: 1, 6: 1, 7: 1, 8: 1, 9: 1, 10: 1})

```

``` python
>>> d6 * -1
H({-6: 1, -5: 1, -4: 1, -3: 1, -2: 1, -1: 1})
>>> d6 * -1 == -d6
True
>>> d6 * -1 == H(-6)
True

```

If the operand is another histogram, combinations are computed. Modeling the sum of two six-sided dice (2d6) can be expressed as:

``` python
>>> d6 + d6
H({2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 5, 9: 4, 10: 3, 11: 2, 12: 1})
>>> print((d6 + d6).format())
avg |    7.00
std |    2.42
var |    5.83
  2 |   2.78% |#
  3 |   5.56% |##
  4 |   8.33% |####
  5 |  11.11% |#####
  6 |  13.89% |######
  7 |  16.67% |########
  8 |  13.89% |######
  9 |  11.11% |#####
 10 |   8.33% |####
 11 |   5.56% |##
 12 |   2.78% |#

```

To sum \({n}\) identical histograms, the matrix multiplication operator (@) provides a shorthand.

``` python
>>> 3@d6 == d6 + d6 + d6
True

```
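The implementation (see __matmul__ in the source below) rejects negative operands, so a sign mistake fails loudly rather than silently:

``` python
>>> -1@d6
Traceback (most recent call last):
  ...
ValueError: argument cannot be negative

```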

The len built-in function can be used to show the number of distinct outcomes.

``` python
>>> len(2@d6)
11

```

The total property can be used to compute the total number of combinations and each outcome’s probability.

``` python
>>> from fractions import Fraction
>>> (2@d6).total
36
>>> [(outcome, Fraction(count, (2@d6).total)) for outcome, count in (2@d6).items()]
[(2, Fraction(1, 36)), (3, Fraction(1, 18)), (4, Fraction(1, 12)), (5, Fraction(1, 9)), (6, Fraction(5, 36)), (7, Fraction(1, 6)), ..., (12, Fraction(1, 36))]

```
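The (experimental) distribution method, documented with the source below, yields comparable outcome/probability pairs directly:

``` python
>>> list((2@d6).distribution())
[(2, Fraction(1, 36)), (3, Fraction(1, 18)), (4, Fraction(1, 12)), (5, Fraction(1, 9)), (6, Fraction(5, 36)), (7, Fraction(1, 6)), ..., (12, Fraction(1, 36))]

```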

Histograms provide common comparators (e.g., eq, ne, etc.). One way to count how often a first six-sided die shows a different face than a second is:

``` python
>>> d6.ne(d6)
H({False: 6, True: 30})
>>> print(d6.ne(d6).format())
avg |    0.83
std |    0.37
var |    0.14
  0 |  16.67% |########
  1 |  83.33% |#########################################

```

Or, how often a first six-sided die shows a face less than a second is:

``` python
>>> d6.lt(d6)
H({False: 21, True: 15})
>>> print(d6.lt(d6).format())
avg |    0.42
std |    0.49
var |    0.24
  0 |  58.33% |#############################
  1 |  41.67% |####################

```

Or how often at least one 2 will show when rolling four six-sided dice:

``` python
>>> d6_eq2 = d6.eq(2) ; d6_eq2  # how often a 2 shows on a single six-sided die
H({False: 5, True: 1})
>>> 4@d6_eq2  # count of 2s showing on 4d6
H({0: 625, 1: 500, 2: 150, 3: 20, 4: 1})
>>> (4@d6_eq2).ge(1)  # how often that count is at least one
H({False: 625, True: 671})
>>> print((4@d6_eq2).ge(1).format())
avg |    0.52
std |    0.50
var |    0.25
  0 |  48.23% |########################
  1 |  51.77% |#########################

```
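As a sanity check, the complement rule agrees: the chance of at least one 2 is one minus the chance that all four dice avoid it.

``` python
>>> from fractions import Fraction
>>> 1 - Fraction(5, 6) ** 4  # matches 671 out of a total of 625 + 671 = 1296
Fraction(671, 1296)

```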

Mind your parentheses

Parentheses are often necessary to enforce the desired order of operations. This is most often an issue with the @ operator, because it behaves differently than the d operator in most dedicated grammars. More specifically, in Python, @ has a lower precedence than . and [].

``` python
>>> 2@d6[7]  # type: ignore [operator]
Traceback (most recent call last):
  ...
KeyError: 7
>>> 2@d6.le(7)  # probably not what was intended
H({2: 36})
>>> 2@d6.le(7) == 2@(d6.le(7))
True

```
``` python
>>> (2@d6)[7]
6
>>> (2@d6).le(7)
H({False: 15, True: 21})
>>> 2@d6.le(7) == (2@d6).le(7)
False

```

Counts are generally accumulated without reduction. To reduce, call the lowest_terms method.

``` python
>>> d6.ge(4)
H({False: 3, True: 3})
>>> d6.ge(4).lowest_terms()
H({False: 1, True: 1})

```

Testing equivalence implicitly performs reductions of operands.

``` python
>>> d6.ge(4) == d6.ge(4).lowest_terms()
True

```
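Equivalence extends to hashing (see __hash__ in the source below, which also reduces to lowest terms), so both forms collapse to a single set member:

``` python
>>> len({d6.ge(4), d6.ge(4).lowest_terms()})
1

```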
Source code in dyce/h.py
class H(_MappingT):
    r"""
    An immutable mapping for use as a histogram which supports arithmetic operations.
    This is useful for modeling discrete outcomes, like individual dice. ``#!python H``
    objects encode finite discrete probability distributions as integer counts without
    any denominator.

    !!! info

        The lack of an explicit denominator is intentional and has two benefits. First,
        a denominator is redundant. Without it, one never has to worry about
        probabilities summing to one (e.g., via miscalculation, floating point error,
        etc.). Second (and perhaps more importantly), sometimes one wants to have an
        insight into non-reduced counts, not just probabilities. If needed,
        probabilities can always be derived, as shown below.

    The [initializer][dyce.h.H.__init__] takes a single parameter, *items*. In its most
    explicit form, *items* maps outcome values to counts.

    Modeling a single six-sided die (``1d6``) can be expressed as:

    ``` python
    >>> from dyce import H
    >>> d6 = H({1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1})

    ```

    An iterable of pairs can also be used (similar to ``#!python dict``).

    ``` python
    >>> d6 == H(((1, 1), (2, 1), (3, 1), (4, 1), (5, 1), (6, 1)))
    True

    ```

    Two shorthands are provided. If *items* is an iterable of numbers, counts of 1 are
    assumed.

    ``` python
    >>> d6 == H((1, 2, 3, 4, 5, 6))
    True

    ```

    Repeated items are accumulated, as one would expect.

    ``` python
    >>> H((2, 3, 3, 4, 4, 5))
    H({2: 1, 3: 2, 4: 2, 5: 1})

    ```

    If *items* is an integer, it is shorthand for creating a sequential range $[{1} ..
    {items}]$ (or $[{items} .. {-1}]$ if *items* is negative).

    ``` python
    >>> d6 == H(6)
    True

    ```

    Histograms are maps, so we can test equivalence against other maps.

    ``` python
    >>> H(6) == {1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1}
    True

    ```

    Simple indexes can be used to look up an outcome’s count.

    ``` python
    >>> H((2, 3, 3, 4, 4, 5))[3]
    2

    ```

    Most arithmetic operators are supported and do what one would expect. If the operand
    is a number, the operator applies to the outcomes.

    ``` python
    >>> d6 + 4
    H({5: 1, 6: 1, 7: 1, 8: 1, 9: 1, 10: 1})

    ```

    ``` python
    >>> d6 * -1
    H({-6: 1, -5: 1, -4: 1, -3: 1, -2: 1, -1: 1})
    >>> d6 * -1 == -d6
    True
    >>> d6 * -1 == H(-6)
    True

    ```

    If the operand is another histogram, combinations are computed. Modeling the sum of
    two six-sided dice (``2d6``) can be expressed as:

    ``` python
    >>> d6 + d6
    H({2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 5, 9: 4, 10: 3, 11: 2, 12: 1})
    >>> print((d6 + d6).format())
    avg |    7.00
    std |    2.42
    var |    5.83
      2 |   2.78% |#
      3 |   5.56% |##
      4 |   8.33% |####
      5 |  11.11% |#####
      6 |  13.89% |######
      7 |  16.67% |########
      8 |  13.89% |######
      9 |  11.11% |#####
     10 |   8.33% |####
     11 |   5.56% |##
     12 |   2.78% |#

    ```

    To sum ${n}$ identical histograms, the matrix multiplication operator (``@``)
    provides a shorthand.

    ``` python
    >>> 3@d6 == d6 + d6 + d6
    True

    ```

    The ``#!python len`` built-in function can be used to show the number of distinct
    outcomes.

    ``` python
    >>> len(2@d6)
    11

    ```

    The [``total`` property][dyce.h.H.total] can be used to compute the total number of
    combinations and each outcome’s probability.

    ``` python
    >>> from fractions import Fraction
    >>> (2@d6).total
    36
    >>> [(outcome, Fraction(count, (2@d6).total)) for outcome, count in (2@d6).items()]
    [(2, Fraction(1, 36)), (3, Fraction(1, 18)), (4, Fraction(1, 12)), (5, Fraction(1, 9)), (6, Fraction(5, 36)), (7, Fraction(1, 6)), ..., (12, Fraction(1, 36))]

    ```

    Histograms provide common comparators (e.g., [``eq``][dyce.h.H.eq],
    [``ne``][dyce.h.H.ne], etc.). One way to count how often a first six-sided die
    shows a different face than a second is:

    ``` python
    >>> d6.ne(d6)
    H({False: 6, True: 30})
    >>> print(d6.ne(d6).format())
    avg |    0.83
    std |    0.37
    var |    0.14
      0 |  16.67% |########
      1 |  83.33% |#########################################

    ```

    Or, how often a first six-sided die shows a face less than a second is:

    ``` python
    >>> d6.lt(d6)
    H({False: 21, True: 15})
    >>> print(d6.lt(d6).format())
    avg |    0.42
    std |    0.49
    var |    0.24
      0 |  58.33% |#############################
      1 |  41.67% |####################

    ```

    Or how often at least one ``#!python 2`` will show when rolling four six-sided dice:

    ``` python
    >>> d6_eq2 = d6.eq(2) ; d6_eq2  # how often a 2 shows on a single six-sided die
    H({False: 5, True: 1})
    >>> 4@d6_eq2  # count of 2s showing on 4d6
    H({0: 625, 1: 500, 2: 150, 3: 20, 4: 1})
    >>> (4@d6_eq2).ge(1)  # how often that count is at least one
    H({False: 625, True: 671})
    >>> print((4@d6_eq2).ge(1).format())
    avg |    0.52
    std |    0.50
    var |    0.25
      0 |  48.23% |########################
      1 |  51.77% |#########################

    ```

    !!! bug "Mind your parentheses"

        Parentheses are often necessary to enforce the desired order of operations. This
        is most often an issue with the ``#!python @`` operator, because it behaves
        differently than the ``d`` operator in most dedicated grammars. More
        specifically, in Python, ``#!python @`` has a [lower
        precedence](https://docs.python.org/3/reference/expressions.html#operator-precedence)
        than ``#!python .`` and ``#!python […]``.

        ``` python
        >>> 2@d6[7]  # type: ignore [operator]
        Traceback (most recent call last):
          ...
        KeyError: 7
        >>> 2@d6.le(7)  # probably not what was intended
        H({2: 36})
        >>> 2@d6.le(7) == 2@(d6.le(7))
        True

        ```

        ``` python
        >>> (2@d6)[7]
        6
        >>> (2@d6).le(7)
        H({False: 15, True: 21})
        >>> 2@d6.le(7) == (2@d6).le(7)
        False

        ```

    Counts are generally accumulated without reduction. To reduce, call the
    [``lowest_terms`` method][dyce.h.H.lowest_terms].

    ``` python
    >>> d6.ge(4)
    H({False: 3, True: 3})
    >>> d6.ge(4).lowest_terms()
    H({False: 1, True: 1})

    ```

    Testing equivalence implicitly performs reductions of operands.

    ``` python
    >>> d6.ge(4) == d6.ge(4).lowest_terms()
    True

    ```
    """
    __slots__: Union[str, Iterable[str]] = ("_h", "_simple_init")

    # ---- Initializer -----------------------------------------------------------------

    @beartype
    def __init__(self, items: _SourceT) -> None:
        r"Initializer."
        super().__init__()
        self._simple_init: Optional[int] = None
        tmp: Counter[RealLike] = counter()

        if isinstance(items, MappingC):
            items = items.items()

        if isinstance(items, SupportsInt):
            if items != 0:
                self._simple_init = as_int(items)
                outcome_range = range(
                    self._simple_init,
                    0,
                    1 if self._simple_init < 0 else -1,  # count toward zero
                )

                if isinstance(items, RealLike):
                    outcome_type = type(items)
                    tmp.update({outcome_type(i): 1 for i in outcome_range})
                else:
                    tmp.update({i: 1 for i in outcome_range})
        elif isinstance(items, HableT):
            tmp.update(items.h())
        elif isinstance(items, IterableC):
            # items is either an Iterable[RealLike] or an Iterable[Tuple[RealLike,
            # SupportsInt]] (although this technically supports Iterable[Union[RealLike,
            # Tuple[RealLike, SupportsInt]]])
            for item in items:
                if isinstance(item, tuple):
                    outcome, count = item
                    tmp[outcome] += as_int(count)
                else:
                    tmp[item] += 1
        else:
            raise ValueError(f"unrecognized initializer {items}")

        # Sort and omit zero counts. As of Python 3.7, insertion order of keys is
        # preserved.
        self._h: _MappingT = {
            outcome: tmp[outcome]
            for outcome in sorted_outcomes(tmp)
            if tmp[outcome] != 0
        }

    # ---- Overrides -------------------------------------------------------------------

    @beartype
    def __repr__(self) -> str:
        if self._simple_init is not None:
            arg = str(self._simple_init)
        elif sys.version_info >= (3, 8):
            arg = pformat(self._h, sort_dicts=False)
        else:
            arg = dict.__repr__(self._h)

        return f"{type(self).__name__}({arg})"

    @beartype
    def __eq__(self, other) -> bool:
        if isinstance(other, HableT):
            return __eq__(self, other.h())
        elif isinstance(other, H):
            return __eq__(self.lowest_terms()._h, other.lowest_terms()._h)
        else:
            return super().__eq__(other)

    @beartype
    def __ne__(self, other) -> bool:
        if isinstance(other, HableT):
            return __ne__(self, other.h())
        elif isinstance(other, H):
            return not __eq__(self, other)
        else:
            return super().__ne__(other)

    @beartype
    def __hash__(self) -> int:
        return hash(frozenset(self._lowest_terms()))

    @beartype
    def __len__(self) -> int:
        return len(self._h)

    @beartype
    def __getitem__(self, key: RealLike) -> int:
        return __getitem__(self._h, key)

    @beartype
    def __iter__(self) -> Iterator[RealLike]:
        return iter(self._h)

    @beartype
    def __add__(self, other: _OperandT) -> H:
        try:
            return self.map(__add__, other)
        except NotImplementedError:
            return NotImplemented

    @beartype
    def __radd__(self, other: RealLike) -> H:
        try:
            return self.rmap(other, __add__)
        except NotImplementedError:
            return NotImplemented

    @beartype
    def __sub__(self, other: _OperandT) -> H:
        try:
            return self.map(__sub__, other)
        except NotImplementedError:
            return NotImplemented

    @beartype
    def __rsub__(self, other: RealLike) -> H:
        try:
            return self.rmap(other, __sub__)
        except NotImplementedError:
            return NotImplemented

    @beartype
    def __mul__(self, other: _OperandT) -> H:
        try:
            return self.map(__mul__, other)
        except NotImplementedError:
            return NotImplemented

    @beartype
    def __rmul__(self, other: RealLike) -> H:
        try:
            return self.rmap(other, __mul__)
        except NotImplementedError:
            return NotImplemented

    @beartype
    def __matmul__(self, other: SupportsInt) -> H:
        try:
            other = as_int(other)
        except TypeError:
            return NotImplemented

        if other < 0:
            raise ValueError("argument cannot be negative")
        else:
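            # Add ``other`` independent copies of this histogram together
            # (e.g., 3@d6 == d6 + d6 + d6)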
            return sum_h(repeat(self, other))

    @beartype
    def __rmatmul__(self, other: SupportsInt) -> H:
        return self.__matmul__(other)

    @beartype
    def __truediv__(self, other: _OperandT) -> H:
        try:
            return self.map(__truediv__, other)
        except NotImplementedError:
            return NotImplemented

    @beartype
    def __rtruediv__(self, other: RealLike) -> H:
        try:
            return self.rmap(other, __truediv__)
        except NotImplementedError:
            return NotImplemented

    @beartype
    def __floordiv__(self, other: _OperandT) -> H:
        try:
            return self.map(__floordiv__, other)
        except NotImplementedError:
            return NotImplemented

    @beartype
    def __rfloordiv__(self, other: RealLike) -> H:
        try:
            return self.rmap(other, __floordiv__)
        except NotImplementedError:
            return NotImplemented

    @beartype
    def __mod__(self, other: _OperandT) -> H:
        try:
            return self.map(__mod__, other)
        except NotImplementedError:
            return NotImplemented

    @beartype
    def __rmod__(self, other: RealLike) -> H:
        try:
            return self.rmap(other, __mod__)
        except NotImplementedError:
            return NotImplemented

    @beartype
    def __pow__(self, other: _OperandT) -> H:
        try:
            return self.map(__pow__, other)
        except NotImplementedError:
            return NotImplemented

    @beartype
    def __rpow__(self, other: RealLike) -> H:
        try:
            return self.rmap(other, __pow__)
        except NotImplementedError:
            return NotImplemented

    @beartype
    def __and__(self, other: Union[SupportsInt, "H", "HableT"]) -> H:
        try:
            if isinstance(other, SupportsInt):
                other = as_int(other)

            return self.map(__and__, other)
        except (NotImplementedError, TypeError):
            return NotImplemented

    @beartype
    def __rand__(self, other: SupportsInt) -> H:
        try:
            return self.rmap(as_int(other), __and__)
        except (NotImplementedError, TypeError):
            return NotImplemented

    @beartype
    def __xor__(self, other: Union[SupportsInt, "H", "HableT"]) -> H:
        try:
            if isinstance(other, SupportsInt):
                other = as_int(other)

            return self.map(__xor__, other)
        except NotImplementedError:
            return NotImplemented

    @beartype
    def __rxor__(self, other: SupportsInt) -> H:
        try:
            return self.rmap(as_int(other), __xor__)
        except (NotImplementedError, TypeError):
            return NotImplemented

    @beartype
    def __or__(self, other: Union[SupportsInt, "H", "HableT"]) -> H:
        try:
            if isinstance(other, SupportsInt):
                other = as_int(other)

            return self.map(__or__, other)
        except (NotImplementedError, TypeError):
            return NotImplemented

    @beartype
    def __ror__(self, other: SupportsInt) -> H:
        try:
            return self.rmap(as_int(other), __or__)
        except (NotImplementedError, TypeError):
            return NotImplemented

    @beartype
    def __neg__(self) -> H:
        return self.umap(__neg__)

    @beartype
    def __pos__(self) -> H:
        return self.umap(__pos__)

    @beartype
    def __abs__(self) -> H:
        return self.umap(__abs__)

    @beartype
    def __invert__(self) -> H:
        return self.umap(__invert__)

    @beartype
    def counts(self) -> ValuesView[int]:
        r"""
        More descriptive synonym for the [``values`` method][dyce.h.H.values].
        """
        return self._h.values()

    @beartype
    def items(self) -> ItemsView[RealLike, int]:
        return self._h.items()

    @beartype
    def keys(self) -> KeysView[RealLike]:
        return self.outcomes()

    @beartype
    def outcomes(self) -> KeysView[RealLike]:
        r"""
        More descriptive synonym for the [``keys`` method][dyce.h.H.keys].
        """
        return self._h.keys()

    @beartype
    def values(self) -> ValuesView[int]:
        return self.counts()

    # ---- Properties ------------------------------------------------------------------

    @property
    def total(self) -> int:
        r"""
        !!! warning "Experimental"

            This property should be considered experimental and may change or disappear
            in future versions.

        Equivalent to ``#!python sum(self.counts())``.
        """

        @experimental
        def _total() -> int:
            return sum(self.counts())

        return _total()

    # ---- Methods ---------------------------------------------------------------------

    @classmethod
    @beartype
    def foreach(
        cls,
        dependent_term: Callable[..., Union["H", RealLike]],
        **independent_sources: _SourceT,
    ) -> H:
        r"""
        !!! warning "Experimental"

            This method should be considered experimental and may change or disappear in
            future versions.

        Calls ``#!python dependent_term`` for each set of outcomes from the product of
        ``independent_sources`` and accumulates the results. This is useful for
        resolving dependent probabilities. Returned histograms are always reduced to
        their lowest terms.

        For example, rolling a d20, re-rolling a ``#!python 1`` if it comes up, and
        keeping the result might be expressed as[^1]:

        [^1]:

            This is primarily for illustration. [``H.substitute``][dyce.h.H.substitute]
            is often better suited to cases involving re-rolling a single independent
            term such as this one.

        ``` python
        >>> d20 = H(20)

        >>> def reroll_one_dependent_term(d20_outcome):
        ...   if d20_outcome == 1:
        ...     return d20
        ...   else:
        ...     return d20_outcome

        >>> H.foreach(reroll_one_dependent_term, d20_outcome=d20)
        H({1: 1,
         2: 21,
         3: 21,
         ...,
         19: 21,
         20: 21})

        ```

        The ``#!python foreach`` class method merely wraps *dependent_term* and calls
        [``P.foreach``][dyce.p.P.foreach]. In doing so, it imposes a very modest
        overhead that is negligible in most cases.

        ``` python
        --8<-- "docs/assets/perf_foreach.txt"
        ```

        <details>
        <summary>Source: <a href="https://github.com/posita/dyce/blob/latest/docs/assets/perf_foreach.ipy"><code>perf_foreach.ipy</code></a></summary>

        ``` python
        --8<-- "docs/assets/perf_foreach.ipy"
        ```
        </details>
        """
        from dyce import P

        def _dependent_term(**roll_kw):
            outcome_kw: Dict[str, RealLike] = {}

            for key, roll in roll_kw.items():
                assert isinstance(roll, tuple)
                assert len(roll) == 1
                outcome_kw[key] = roll[0]

            return dependent_term(**outcome_kw)

        return P.foreach(_dependent_term, **independent_sources)

    @beartype
    def map(self, bin_op: _BinaryOperatorT, right_operand: _OperandT) -> H:
        r"""
        Applies *bin_op* to each outcome of the histogram as the left operand and
        *right_operand* as the right. Shorthands exist for many arithmetic operators and
        comparators.

        ``` python
        >>> import operator
        >>> d6 = H(6)
        >>> d6.map(operator.__add__, d6)
        H({2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 5, 9: 4, 10: 3, 11: 2, 12: 1})
        >>> d6.map(operator.__add__, d6) == d6 + d6
        True

        ```

        ``` python
        >>> d6.map(operator.__pow__, 2)
        H({1: 1, 4: 1, 9: 1, 16: 1, 25: 1, 36: 1})
        >>> d6.map(operator.__pow__, 2) == d6 ** 2
        True

        ```

        ``` python
        >>> d6.map(operator.__gt__, 3)
        H({False: 3, True: 3})
        >>> d6.map(operator.__gt__, 3) == d6.gt(3)
        True

        ```
        """
        if isinstance(right_operand, HableT):
            right_operand = right_operand.h()

        if isinstance(right_operand, H):
            return type(self)(
                (bin_op(s, o), self[s] * right_operand[o])
                for s, o in product(self, right_operand)
            )
        else:
            return type(self)(
                (bin_op(outcome, right_operand), count)
                for outcome, count in self.items()
            )

    @beartype
    def rmap(self, left_operand: RealLike, bin_op: _BinaryOperatorT) -> H:
        r"""
        Analogous to the [``map`` method][dyce.h.H.map], but where the caller supplies
        *left_operand*.

        ``` python
        >>> import operator
        >>> d6 = H(6)
        >>> d6.rmap(2, operator.__pow__)
        H({2: 1, 4: 1, 8: 1, 16: 1, 32: 1, 64: 1})
        >>> d6.rmap(2, operator.__pow__) == 2 ** d6
        True

        ```

        !!! note

            The positions of *left_operand* and *bin_op* are different from
            [``map`` method][dyce.h.H.map]. This is intentional and serves as a reminder
            of operand ordering.
        """
        return type(self)(
            (bin_op(left_operand, outcome), count) for outcome, count in self.items()
        )

    @beartype
    def umap(self, un_op: _UnaryOperatorT) -> H:
        r"""
        Applies *un_op* to each outcome of the histogram.

        ``` python
        >>> import operator
        >>> H(6).umap(operator.__neg__)
        H(-6)

        ```

        ``` python
        >>> H(4).umap(lambda outcome: (-outcome) ** outcome)
        H({-27: 1, -1: 1, 4: 1, 256: 1})

        ```
        """
        h = type(self)((un_op(outcome), count) for outcome, count in self.items())

        if self._simple_init is not None:
            simple_init = un_op(self._simple_init)

            if isinstance(simple_init, SupportsInt):
                h_simple = type(self)(simple_init)

                if h_simple == h:
                    return h_simple

        return h

    @beartype
    def lt(self, other: _OperandT) -> H:
        r"""
        Shorthand for ``#!python self.map(operator.__lt__, other).umap(bool)``.

        ``` python
        >>> H(6).lt(3)
        H({False: 4, True: 2})

        ```

        See the [``map``][dyce.h.H.map] and [``umap``][dyce.h.H.umap] methods.
        """
        return self.map(__lt__, other).umap(bool)

    @beartype
    def le(self, other: _OperandT) -> H:
        r"""
        Shorthand for ``#!python self.map(operator.__le__, other).umap(bool)``.

        ``` python
        >>> H(6).le(3)
        H({False: 3, True: 3})

        ```

        See the [``map``][dyce.h.H.map] and [``umap``][dyce.h.H.umap] methods.
        """
        return self.map(__le__, other).umap(bool)

    @beartype
    def eq(self, other: _OperandT) -> H:
        r"""
        Shorthand for ``#!python self.map(operator.__eq__, other).umap(bool)``.

        ``` python
        >>> H(6).eq(3)
        H({False: 5, True: 1})

        ```

        See the [``map``][dyce.h.H.map] and [``umap``][dyce.h.H.umap] methods.
        """
        return self.map(__eq__, other).umap(bool)

    @beartype
    def ne(self, other: _OperandT) -> H:
        r"""
        Shorthand for ``#!python self.map(operator.__ne__, other).umap(bool)``.

        ``` python
        >>> H(6).ne(3)
        H({False: 1, True: 5})

        ```

        See the [``map``][dyce.h.H.map] and [``umap``][dyce.h.H.umap] methods.
        """
        return self.map(__ne__, other).umap(bool)

    @beartype
    def gt(self, other: _OperandT) -> H:
        r"""
        Shorthand for ``#!python self.map(operator.__gt__, other).umap(bool)``.

        ``` python
        >>> H(6).gt(3)
        H({False: 3, True: 3})

        ```

        See the [``map``][dyce.h.H.map] and [``umap``][dyce.h.H.umap] methods.
        """
        return self.map(__gt__, other).umap(bool)

    @beartype
    def ge(self, other: _OperandT) -> H:
        r"""
        Shorthand for ``#!python self.map(operator.__ge__, other).umap(bool)``.

        ``` python
        >>> H(6).ge(3)
        H({False: 2, True: 4})

        ```

        See the [``map``][dyce.h.H.map] and [``umap``][dyce.h.H.umap] methods.
        """
        return self.map(__ge__, other).umap(bool)

    @beartype
    def is_even(self) -> H:
        r"""
        Equivalent to ``#!python self.umap(dyce.types.is_even)``.

        ``` python
        >>> H((-4, -2, 0, 1, 2, 3)).is_even()
        H({False: 2, True: 4})

        ```

        See the [``umap`` method][dyce.h.H.umap].
        """
        return self.umap(is_even)

    @beartype
    def is_odd(self) -> H:
        r"""
        Equivalent to ``#!python self.umap(dyce.types.is_odd)``.

        ``` python
        >>> H((-4, -2, 0, 1, 2, 3)).is_odd()
        H({False: 4, True: 2})

        ```

        See the [``umap`` method][dyce.h.H.umap].
        """
        return self.umap(is_odd)

    @beartype
    def accumulate(self, other: _SourceT) -> H:
        r"""
        Accumulates counts.

        ``` python
        >>> H(4).accumulate(H(6))
        H({1: 2, 2: 2, 3: 2, 4: 2, 5: 1, 6: 1})

        ```
        """
        if isinstance(other, MappingC):
            other = other.items()
        elif not isinstance(other, IterableC):
            other = cast(Iterable[RealLike], (other,))

        return type(self)(chain(self.items(), cast(Iterable, other)))

    @experimental
    @beartype
    def exactly_k_times_in_n(
        self,
        outcome: RealLike,
        n: SupportsInt,
        k: SupportsInt,
    ) -> int:
        r"""
        !!! warning "Experimental"

            This method should be considered experimental and may change or disappear in
            future versions.

        Computes and returns the number of ways *outcome* can appear exactly *k* times
        among ``#!python n@self``.

        ``` python
        >>> H(6).exactly_k_times_in_n(outcome=5, n=4, k=2)
        150
        >>> H((2, 3, 3, 4, 4, 5)).exactly_k_times_in_n(outcome=2, n=3, k=3)
        1
        >>> H((2, 3, 3, 4, 4, 5)).exactly_k_times_in_n(outcome=4, n=3, k=3)
        8

        ```
        """
        n = as_int(n)
        k = as_int(k)
        assert k <= n
        c_outcome = self.get(outcome, 0)

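        # comb(n, k) ways to choose which k of the n rolls show *outcome*,
        # weighted by that outcome's count for those rolls and by all other
        # outcomes' combined count for the remaining n - k rolls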
        return comb(n, k) * c_outcome ** k * (self.total - c_outcome) ** (n - k)

    @beartype
    def explode(
        self,
        max_depth: SupportsInt = 1,
        precision_limit: Fraction = Fraction(0),
    ) -> H:
        r"""
        Shorthand for ``#!python self.substitute(lambda h, outcome: outcome if len(h) == 1
        else h if outcome == max(h) else outcome, operator.__add__, max_depth,
        precision_limit)``.

        ``` python
        >>> H(6).explode(max_depth=2)
        H({1: 36, 2: 36, 3: 36, 4: 36, 5: 36, 7: 6, 8: 6, 9: 6, 10: 6, 11: 6, 13: 1, 14: 1, 15: 1, 16: 1, 17: 1, 18: 1})

        ```

        See the [``substitute`` method][dyce.h.H.substitute].
        """
        return self.substitute(
            lambda h, outcome: outcome
            if len(h) == 1
            else h
            if outcome == max(h)
            else outcome,
            __add__,
            max_depth,
            precision_limit,
        )

    @beartype
    def lowest_terms(self) -> H:
        r"""
        Computes and returns a histogram whose counts share a greatest common divisor of 1.

        ``` python
        >>> df = H((-1, -1, 0, 0, 1, 1)) ; df
        H({-1: 2, 0: 2, 1: 2})
        >>> df.lowest_terms()
        H({-1: 1, 0: 1, 1: 1})

        ```

        ``` python
        >>> d6avg = H((2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 5, 5)) ; d6avg
        H({2: 2, 3: 4, 4: 4, 5: 2})
        >>> d6avg.lowest_terms()
        H({2: 1, 3: 2, 4: 2, 5: 1})

        ```
        """
        return type(self)(self._lowest_terms())

    @experimental
    @beartype
    def order_stat_for_n_at_pos(self, n: SupportsInt, pos: SupportsInt) -> H:
        r"""
        !!! warning "Experimental"

            This method should be considered experimental and may change or disappear in
            future versions.

        Shorthand for ``#!python self.order_stat_func_for_n(n)(pos)``.
        """
        # TODO(posita): Explore different memoization strategies (e.g., with
        # functools.cache) for this method and H.order_stat_func_for_n
        return self.order_stat_func_for_n(n)(pos)

    @experimental
    @beartype
    def order_stat_func_for_n(self, n: SupportsInt) -> Callable[[SupportsInt], "H"]:
        r"""
        !!! warning "Experimental"

            This method should be considered experimental and may change or disappear in
            future versions.

        Returns a function that takes a single argument (*pos*) and computes the
        probability distribution for each outcome appearing in that position among
        ``#!python n@self``.

        ``` python
        >>> d6avg = H((2, 3, 3, 4, 4, 5))
        >>> order_stat_for_5d6avg = d6avg.order_stat_func_for_n(5)
        >>> order_stat_for_5d6avg(3)  # counts where outcome appears at index 3
        H({2: 26, 3: 1432, 4: 4792, 5: 1526})

        ```

        The results show that, when rolling five six-sided “averaging” dice and sorting
        each roll, there are 26 ways where ``#!python 2`` appears at the fourth (index
        ``#!python 3``) position, 1432 ways where ``#!python 3`` appears at the fourth
        position, etc. This can be verified independently using the computationally
        expensive method of enumerating rolls and counting those that meet the criteria.

        ``` python
        >>> from dyce import P
        >>> p_5d6avg = 5@P(d6avg)
        >>> sum(count for roll, count in p_5d6avg.rolls_with_counts() if roll[3] == 5)
        1526

        ```

        This method exists in addition to the
        [``H.order_stat_for_n_at_pos`` method][dyce.h.H.order_stat_for_n_at_pos] because
        computing the betas for each outcome in *n* is unnecessary for each *pos*. Where
        different *pos* values are needed for the same *n* (e.g., in a loop) and where
        *n* is large, that overhead can be significant. The returned function caches
        those betas for *n* so that results for repeated queries at different *pos*
        values can be computed much faster.

        ``` python
        --8<-- "docs/assets/perf_order_stat_for_n.txt"
        ```

        <details>
        <summary>Source: <a href="https://github.com/posita/dyce/blob/latest/docs/assets/perf_order_stat_for_n.ipy"><code>perf_order_stat_for_n.ipy</code></a></summary>

        ``` python
        --8<-- "docs/assets/perf_order_stat_for_n.ipy"
        ```
        </details>
        """
        betas_by_outcome: Dict[RealLike, Tuple[H, H]] = {}
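        # For each outcome, model how many of the n rolls land at or below it
        # (h_le) versus strictly below it (h_lt); the tail-count difference in
        # _gen_h_items_at_pos isolates rolls whose sorted value at index pos
        # equals that outcome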

        for outcome in self.outcomes():
            betas_by_outcome[outcome] = (
                n @ self.le(outcome),
                n @ self.lt(outcome),
            )

        def _gen_h_items_at_pos(pos: int) -> Iterator[Tuple[RealLike, int]]:
            for outcome, (h_le, h_lt) in betas_by_outcome.items():
                yield (
                    outcome,
                    h_le.gt(pos).get(True, 0) - h_lt.gt(pos).get(True, 0),
                )

        @beartype
        def order_stat_for_n_at_pos(pos: SupportsInt) -> H:
            return type(self)(_gen_h_items_at_pos(as_int(pos)))

        return order_stat_for_n_at_pos

    @beartype
    def substitute(
        self,
        expand: _ExpandT,
        coalesce: _CoalesceT = coalesce_replace,
        max_depth: SupportsInt = 1,
        precision_limit: Fraction = Fraction(0),
    ) -> H:
        r"""
        !!! warning "Experimental"

            The *precision_limit* parameter should be considered experimental and may
            change or disappear in future versions.

        Calls *expand* on each outcome. If *expand* returns a single outcome, it
        replaces the existing outcome. If it returns an [``H`` object][dyce.h.H],
        expansion is performed again (recursively) on that object until *max_depth* or
        *precision_limit* is exhausted. *coalesce* is called with the expanded histogram
        and the original outcome, and the returned histogram is “folded” into the
        result. (More on these terms and concepts below.)

        The default behavior for *coalesce* is to replace the outcome with the expanded
        histogram. Returned histograms are always reduced to their lowest terms.

        !!! note "*coalesce* is not called unless *expand* returns a histogram"

            If *expand* returns a single outcome, it *always* replaces the existing
            outcome. This is intentional. To return a single outcome, but trigger
            *coalesce*, characterize that outcome as a single-sided die (e.g.,
            ``#!python H({outcome: 1})``).

        See the [``coalesce_replace``][dyce.h.coalesce_replace] and
        [``lowest_terms``][dyce.h.H.lowest_terms] methods.

        This method can be used to model complex mechanics. The following models
        re-rolling a face of 1 on the first roll:

        ``` python
        >>> def reroll_one(h: H, outcome):
        ...   return h if outcome == 1 else outcome

        >>> H(6).substitute(reroll_one)
        H({1: 1, 2: 7, 3: 7, 4: 7, 5: 7, 6: 7})

        ```

        See the [``explode`` method][dyce.h.H.explode] for a common shorthand for
        “exploding” dice (i.e., where, if the greatest face comes up, the die is
        re-rolled and the result is added to a running sum).

        This method uses the [``aggregate_with_counts``][dyce.h.aggregate_with_counts]
        function in its implementation. As such, if *coalesce* returns the empty
        histogram (``H({})``), the corresponding outcome and its counts are omitted from
        the result without substitution or scaling. A silly example is modeling a d5 by
        indefinitely re-rolling a d6 until something other than a 6 comes up.

        ``` python
        >>> H(6).substitute(lambda __, outcome: H({}) if outcome == 6 else outcome)
        H({1: 1, 2: 1, 3: 1, 4: 1, 5: 1})

        ```

        This technique is more useful when modeling re-rolling certain derived
        outcomes, like ties in a contest.

        ``` python
        >>> d6_3, d8_2 = 3@H(6), 2@H(8)
        >>> d6_3.vs(d8_2)
        H({-1: 4553, 0: 1153, 1: 8118})
        >>> d6_3.vs(d8_2).substitute(lambda __, outcome: H({}) if outcome == 0 else outcome)
        H({-1: 4553, 1: 8118})

        ```

        Because it delegates to a callback for refereeing substitution decisions,
        ``#!python substitute`` is quite flexible and well suited to modeling (or at
        least approximating) logical progressions with dependent variables. Consider the
        following mechanic:

          1. Start with a total of zero.
          2. Roll a six-sided die. Add the face to the total. If the face was a six, go
             to step 3. Otherwise stop.
          3. Roll a four-sided die. Add the face to the total. If the face was a four,
             go to step 2. Otherwise stop.

        What is the likelihood of an even final tally? This can be approximated by:

        ``` python
        >>> d4, d6 = H(4), H(6)

        >>> def reroll_greatest_on_d4_d6(h: H, outcome):
        ...   if outcome == max(h):
        ...     if h == d6: return d4
        ...     if h == d4: return d6
        ...   return outcome

        >>> import operator
        >>> h = d6.substitute(reroll_greatest_on_d4_d6, operator.__add__, max_depth=6)
        >>> h_even = h.is_even()
        >>> print(f"{h_even[1] / h_even.total:.3%}")
        39.131%

        ```

        Surprised? Because both six and four are even numbers, the only way we keep
        rolling is if the total is even. You might think this would lead to evens being
        *more* likely. However, we only care about the final tally and the rules direct
        us to re-roll certain evens (nudging us toward an odd number more often than
        not).

        We can also use this method to model expected damage from a single attack in
        d20-like role-playing games.

        ``` python
        >>> bonus = 1
        >>> dmg_dice = H(8)
        >>> dmg = dmg_dice + bonus
        >>> crit = dmg + dmg_dice
        >>> target = 15 - bonus
        >>> d20 = H(20)

        >>> def dmg_from_attack_roll(h: H, outcome):
        ...   if outcome == 20:
        ...     return crit
        ...   elif outcome >= target:
        ...     return dmg
        ...   else:
        ...     return 0

        >>> h = d20.substitute(dmg_from_attack_roll)
        >>> print(h.format(scaled=True))
        avg |    2.15
        std |    3.40
        var |   11.55
          0 |  65.00% |##################################################
          2 |   3.75% |##
          3 |   3.83% |##
          4 |   3.91% |###
          5 |   3.98% |###
          6 |   4.06% |###
          7 |   4.14% |###
          8 |   4.22% |###
          9 |   4.30% |###
         10 |   0.62% |
         11 |   0.55% |
         12 |   0.47% |
         13 |   0.39% |
         14 |   0.31% |
         15 |   0.23% |
         16 |   0.16% |
         17 |   0.08% |

        ```

        When *expand* returns an [``H`` object][dyce.h.H], outcomes produced from the
        corresponding *coalesce* are accumulated, but the counts retain their “scale”
        within the context of the expansion. This becomes clearer when there is no
        overlap between the substituted histogram and the other outcomes.

        ``` python
        >>> d6 = H(6)
        >>> d00 = (H(10) - 1) * 10 ; d00
        H({0: 1, 10: 1, 20: 1, 30: 1, 40: 1, 50: 1, 60: 1, 70: 1, 80: 1, 90: 1})
        >>> set(d6) & set(d00) == set()  # no outcomes in common
        True
        >>> d6_d00 = d6.substitute(
        ...   # If a one comes up when rolling the d6,
        ...   # roll a d00 and take that result instead
        ...   lambda h, outcome: d00 if outcome == 1 else outcome
        ... ) ; d6_d00
        H({0: 1,
         2: 10,
         3: 10,
         4: 10,
         5: 10,
         6: 10,
         10: 1,
         20: 1,
         30: 1,
         40: 1,
         50: 1,
         60: 1,
         70: 1,
         80: 1,
         90: 1})

        ```

        Note that the outcomes’ counts from the d00 sum to the same proportion of the
        whole as the count for the one they replaced from the d6.

        ``` python
        >>> from fractions import Fraction
        >>> Fraction(
        ...   sum(count for outcome, count in d6_d00.items() if outcome in d00),
        ...   d6_d00.total,
        ... )
        Fraction(1, 6)
        >>> Fraction(d6[1], d6.total)
        Fraction(1, 6)

        ```

        !!! tip "Precision limits"

            This method will halt recursive substitution on any branch *either* when its
            depth exceeds *max_depth* *or* its “contextual precision” is
            *precision_limit* or less. In either case, substitution is attempted for all
            of the outcomes of a(n expanded) histogram or none of them. The contextual
            precision of a histogram is its proportion to the whole. The contextual
            precision of the original (or top-level) histogram is ``#!python Fraction(1,
            1)``. By setting *precision_limit* to that value, we basically ensure no
            substitution.

            ``` python
            >>> d6.substitute(
            ...   lambda h, outcome: d00 if outcome == 1 else outcome,
            ...   precision_limit=Fraction(1, 1),  # no substitution
            ... ) == d6
            True
            >>> d6.substitute(
            ...   lambda h, outcome: d00 if outcome == 1 else outcome,
            ...   max_depth=0,  # no substitution
            ... ) == d6
            True

            ```

            Let’s make a contrived, but illustrative modification to our d6/d00 example
            from above. If a one comes up when rolling a d6, roll a d00, but re-roll any
            80s.

            ``` python
            >>> d6.substitute(
            ...   lambda h, outcome: d00 if outcome in (1, 80) else outcome,
            ...   max_depth=100,  # <-- we'll never hit this
            ...   # will halt substitution after the original one from the d6
            ...   precision_limit=Fraction(1, 6),
            ... ) == d6_d00
            True
            >>> d6.substitute(
            ...   lambda h, outcome: d00 if outcome in (1, 80) else outcome,
            ...   max_depth=100,
            ...   # will halt substitution after the first 80 substitution
            ...   # after the original one from the d6
            ...   precision_limit=Fraction(1, 6) * Fraction(1, 10),
            ... )
            H({0: 11,
             2: 100,
             ...,
             6: 100,
             10: 11,
             ...,
             70: 11,
             80: 1,
             90: 11})
            >>> d6.substitute(
            ...   lambda h, outcome: d00 if outcome in (1, 80) else outcome,
            ...   max_depth=100,
            ...   # will halt substitution after the second 80 substitution
            ...   # after the original one from the d6
            ...   precision_limit=Fraction(1, 6) * Fraction(1, 10) - Fraction(1, 1000000000),  # <-- juuust under the wire
            ... )
            H({0: 111,
             2: 1000,
             ...,
             6: 1000,
             10: 111,
             ...,
             70: 111,
             80: 1,
             90: 111})

            ```

            The default value for *precision_limit* is zero, which basically means it is
            ignored and recursion is limited solely by *max_depth*. If you want to
            ensure that this method stops delving based *solely* on precision, set
            *max_depth* to ``#!python -1``, which is equivalent to ``#!python
            sys.getrecursionlimit() + 1``[^1]. Be aware that this skews results in favor
            of non-limited branches.

            ``` python
            >>> h = H({1: 1, 2: 2, 3: 3})
            >>> print(h.explode(max_depth=5).format(scaled=True))
            avg |    4.59
            std |    3.96
            var |   15.65
              1 |  16.67% |########################
              2 |  33.33% |#################################################
              4 |   8.33% |############
              5 |  16.67% |########################
              7 |   4.17% |######
              8 |   8.33% |############
             10 |   2.08% |###
             11 |   4.17% |######
             13 |   1.04% |#
             14 |   2.08% |###
             16 |   0.52% |
             17 |   1.04% |#
             18 |   1.56% |##
            >>> print(h.explode(max_depth=-1, precision_limit=Fraction(1, 6 ** 2)).format(scaled=True))
            avg |    4.63
            std |    4.09
            var |   16.72
              1 |  16.67% |########################
              2 |  33.33% |#################################################
              4 |   8.33% |############
              5 |  16.67% |########################
              7 |   4.17% |######
              8 |   8.33% |############
             10 |   2.08% |###
             11 |   4.17% |######
             13 |   1.04% |#
             14 |   2.08% |###
             16 |   0.52% |
             17 |   1.04% |#
             19 |   0.26% |
             20 |   0.52% |
             21 |   0.78% |#

            ```

            Also be aware that without *max_depth* as a safety net, some substitutions
            are guaranteed to result in ``#!python RecursionError``s, even with very
            high *precision_limit*s.

            ``` python
            >>> H(1).substitute(
            ...   lambda h, outcome: H({outcome + 1: 1}),  # expands to a single-sided die
            ...   max_depth=-1,
            ...   precision_limit=Fraction(999999, 1000000),
            ... )
            Traceback (most recent call last):
              ...
            RecursionError: maximum recursion depth exceeded in comparison

            ```

            [``H.explode``][dyce.h.H.explode]’s *expand* implementation guards against
            this by returning ``#!python outcome`` if the passed histogram has only one
            face. Consider a similar approach for your own *expand* implementations if
            outcomes’ contextual probabilities do not asymptotically approach zero.

        [^1]:

            This method will “bottom out” far earlier. As of this writing, the practical
            limit of its implementation (without optimization) is something close to
            $\frac {1} {3} \times \left( limit - depth \right)$, where $limit$ is
            ``#!python sys.getrecursionlimit()`` and $depth$ is ``#!python
            len(inspect.stack(0))``. This also assumes the provided implementations for
            *expand* and *coalesce* don’t contribute significantly to the call stack.
            Setting *max_depth* to ``#!python -1`` or one beyond the absolute limit
            signals that the caller wants it out of the way.
        """
        max_depth = as_int(max_depth)

        if max_depth == -1:
            max_depth = sys.getrecursionlimit() + 1

        if max_depth < 0:
            raise ValueError(
                "max_depth cannot be an arbitrary negative number (use -1 explicitly to indicate no limit)"
            )

        if precision_limit < 0 or precision_limit > 1:
            raise ValueError(
                f"precision_limit ({precision_limit}) must be between zero and one, inclusive"
            )

        def _substitute(
            h: H,
            depth: int = 0,
            contextual_precision: Fraction = Fraction(1),
        ) -> H:
            assert coalesce is not None

            if depth == max_depth or contextual_precision <= precision_limit:
                return h

            def _expand_and_coalesce() -> Iterator[Tuple[Union[H, RealLike], int]]:
                total = h.total

                for outcome, count in h.items():
                    expanded = expand(h, outcome)

                    if isinstance(expanded, H):
                        # Keep expanding deeper, if we can
                        expanded_precision = Fraction(
                            contextual_precision.numerator * count,
                            contextual_precision.denominator * total,
                        )
                        expanded = _substitute(expanded, depth + 1, expanded_precision)
                        # Coalesce the result
                        expanded = coalesce(expanded, outcome)

                    yield expanded, count

            return aggregate_with_counts(_expand_and_coalesce(), type(self))

        return _substitute(self).lowest_terms()

    @beartype
    def vs(self, other: _OperandT) -> H:
        r"""
        Compares the histogram with *other*. -1 represents where *other* is greater. 0
        represents where they are equal. 1 represents where *other* is less.

        Shorthand for ``#!python self.within(0, 0, other)``.

        ``` python
        >>> H(6).vs(H(4))
        H({-1: 6, 0: 4, 1: 14})
        >>> H(6).vs(H(4)) == H(6).within(0, 0, H(4))
        True

        ```

        See the [``within`` method][dyce.h.H.within].
        """
        return self.within(0, 0, other)

    @beartype
    def within(self, lo: RealLike, hi: RealLike, other: _OperandT = 0) -> H:
        r"""
        Computes the difference between the histogram and *other*. -1 represents where that
        difference is less than *lo*. 0 represents where that difference is between *lo*
        and *hi* (inclusive). 1 represents where that difference is greater than *hi*.

        ``` python
        >>> d6_2 = 2@H(6)
        >>> d6_2.within(7, 9)
        H({-1: 15, 0: 15, 1: 6})
        >>> print(d6_2.within(7, 9).format())
        avg |   -0.25
        std |    0.72
        var |    0.52
         -1 |  41.67% |####################
          0 |  41.67% |####################
          1 |  16.67% |########

        ```

        ``` python
        >>> d6_3, d8_2 = 3@H(6), 2@H(8)
        >>> d6_3.within(-1, 1, d8_2)  # 3d6 w/in 1 of 2d8
        H({-1: 3500, 0: 3412, 1: 6912})
        >>> print(d6_3.within(-1, 1, d8_2).format())
        avg |    0.25
        std |    0.83
        var |    0.69
         -1 |  25.32% |############
          0 |  24.68% |############
          1 |  50.00% |#########################

        ```
        """
        return self.map(_within(lo, hi), other)

    @overload
    def distribution(
        self,
        fill_items: Optional[_MappingT] = None,
    ) -> Iterator[Tuple[RealLike, Fraction]]:
        ...

    @overload
    def distribution(
        self,
        fill_items: _MappingT,
        rational_t: _RationalInitializerT[_T],
    ) -> Iterator[Tuple[RealLike, _T]]:
        ...

    @overload
    def distribution(
        self,
        *,
        rational_t: _RationalInitializerT[_T],
    ) -> Iterator[Tuple[RealLike, _T]]:
        ...

    @experimental
    @beartype
    def distribution(
        self,
        fill_items: Optional[_MappingT] = None,
        # TODO(posita): See <https://github.com/python/mypy/issues/10854> for context on
        # all the @overload work-around nonsense above and remove those once that issue
        # is addressed.
        rational_t: _RationalInitializerT[_T] = cast(_RationalInitializerT, Fraction),
    ) -> Iterator[Tuple[RealLike, _T]]:
        r"""
        Presentation helper function returning an iterator for each outcome/count or
        outcome/probability pair.

        ``` python
        >>> h = H((1, 2, 3, 3, 4, 4, 5, 6))
        >>> list(h.distribution())
        [(1, Fraction(1, 8)), (2, Fraction(1, 8)), (3, Fraction(1, 4)), (4, Fraction(1, 4)), (5, Fraction(1, 8)), (6, Fraction(1, 8))]
        >>> list(h.ge(3).distribution())
        [(False, Fraction(1, 4)), (True, Fraction(3, 4))]

        ```

        If provided, *fill_items* supplies defaults for any “missing” outcomes.

        ``` python
        >>> list(h.distribution())
        [(1, Fraction(1, 8)), (2, Fraction(1, 8)), (3, Fraction(1, 4)), (4, Fraction(1, 4)), (5, Fraction(1, 8)), (6, Fraction(1, 8))]
        >>> list(h.distribution(fill_items={0: 0, 7: 0}))
        [(0, Fraction(0, 1)), (1, Fraction(1, 8)), (2, Fraction(1, 8)), (3, Fraction(1, 4)), (4, Fraction(1, 4)), (5, Fraction(1, 8)), (6, Fraction(1, 8)), (7, Fraction(0, 1))]

        ```

        !!! warning "Experimental"

            The *rational_t* argument to this method should be considered experimental
            and may change or disappear in future versions.

        If provided, *rational_t* must be a callable that takes two ``#!python int``s (a
        numerator and denominator) and returns an instance of a desired (but otherwise
        arbitrary) type.

        ``` python
        >>> list(h.distribution(rational_t=lambda n, d: f"{n}/{d}"))
        [(1, '1/8'), (2, '1/8'), (3, '2/8'), (4, '2/8'), (5, '1/8'), (6, '1/8')]

        ```

        ``` python
        >>> import sympy
        >>> list(h.distribution(rational_t=sympy.Rational))
        [(1, 1/8), (2, 1/8), (3, 1/4), (4, 1/4), (5, 1/8), (6, 1/8)]

        ```

        ``` python
        >>> import sage.rings.rational  # doctest: +SKIP
        >>> list(h.distribution(rational_t=lambda n, d: sage.rings.rational.Rational((n, d))))  # doctest: +SKIP
        [(1, 1/8), (2, 1/8), (3, 1/4), (4, 1/4), (5, 1/8), (6, 1/8)]

        ```

        !!! note

            The arguments passed to *rational_t* are not reduced to the lowest terms.

        The *rational_t* argument is a convenience. Iteration or comprehension can be
        used to accomplish something similar.

        ``` python
        >>> [(outcome, f"{probability.numerator}/{probability.denominator}") for outcome, probability in (h).distribution()]
        [(1, '1/8'), (2, '1/8'), (3, '1/4'), (4, '1/4'), (5, '1/8'), (6, '1/8')]

        ```

        Many number implementations can convert directly from ``#!python
        fractions.Fraction``s.

        ``` python
        >>> import sympy.abc
        >>> [(outcome, sympy.Rational(probability)) for outcome, probability in (h + sympy.abc.x).distribution()]
        [(x + 1, 1/8), (x + 2, 1/8), (x + 3, 1/4), (x + 4, 1/4), (x + 5, 1/8), (x + 6, 1/8)]

        ```

        ``` python
        >>> import sage.rings.rational  # doctest: +SKIP
        >>> [(outcome, sage.rings.rational.Rational(probability)) for outcome, probability in h.distribution()]  # doctest: +SKIP
        [(1, 1/8), (2, 1/8), (3, 1/4), (4, 1/4), (5, 1/8), (6, 1/8)]

        ```
        """
        if fill_items is None:
            fill_items = {}

        combined = dict(chain(fill_items.items(), self.items()))
        total = sum(combined.values()) or 1

        return (
            (outcome, rational_t(combined[outcome], total))
            for outcome in sorted_outcomes(combined)
        )

    @beartype
    def distribution_xy(
        self,
        fill_items: Optional[_MappingT] = None,
    ) -> Tuple[Tuple[RealLike, ...], Tuple[float, ...]]:
        r"""
        Presentation helper function returning a “zipped” arrangement of the output
        from the [``distribution`` method][dyce.h.H.distribution], ensuring the
        values are ``#!python float``s.

        ``` python
        >>> list(H(6).distribution())
        [(1, Fraction(1, 6)), (2, Fraction(1, 6)), (3, Fraction(1, 6)), (4, Fraction(1, 6)), (5, Fraction(1, 6)), (6, Fraction(1, 6))]
        >>> H(6).distribution_xy()
        ((1, 2, 3, 4, 5, 6), (0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666))

        ```
        """
        # TODO(posita): See <https://github.com/python/typing/issues/193>
        return tuple(  # type: ignore [return-value]
            zip(
                *(
                    (outcome, float(probability))
                    for outcome, probability in self.distribution(fill_items)
                )
            )
        )

    @beartype
    def format(
        self,
        fill_items: Optional[_MappingT] = None,
        width: SupportsInt = _ROW_WIDTH,
        scaled: bool = False,
        tick: str = "#",
        sep: str = os.linesep,
    ) -> str:
        r"""
        Returns a formatted string representation of the histogram. If provided,
        *fill_items* supplies defaults for any missing outcomes. If *width* is greater
        than zero, a horizontal bar ASCII graph is printed using *tick* and *sep* (which
        are otherwise ignored if *width* is zero or less).

        ``` python
        >>> print(H(6).format(width=0))
        {avg: 3.50, 1: 16.67%, 2: 16.67%, 3: 16.67%, 4: 16.67%, 5: 16.67%, 6: 16.67%}

        ```

        ``` python
        >>> print((2@H(6)).format(fill_items={i: 0 for i in range(1, 21)}, tick="@"))
        avg |    7.00
        std |    2.42
        var |    5.83
          1 |   0.00% |
          2 |   2.78% |@
          3 |   5.56% |@@
          4 |   8.33% |@@@@
          5 |  11.11% |@@@@@
          6 |  13.89% |@@@@@@
          7 |  16.67% |@@@@@@@@
          8 |  13.89% |@@@@@@
          9 |  11.11% |@@@@@
         10 |   8.33% |@@@@
         11 |   5.56% |@@
         12 |   2.78% |@
         13 |   0.00% |
         14 |   0.00% |
         15 |   0.00% |
         16 |   0.00% |
         17 |   0.00% |
         18 |   0.00% |
         19 |   0.00% |
         20 |   0.00% |

        ```

        If *scaled* is ``#!python True``, horizontal bars are scaled to *width*.

        ``` python
        >>> h = (2@H(6)).ge(7)
        >>> print(f"{' 65 chars wide -->|':->65}")
        ---------------------------------------------- 65 chars wide -->|
        >>> print(h.format(scaled=False))
        avg |    0.58
        std |    0.49
        var |    0.24
          0 |  41.67% |####################
          1 |  58.33% |#############################
        >>> print(h.format(scaled=True))
        avg |    0.58
        std |    0.49
        var |    0.24
          0 |  41.67% |###################################
          1 |  58.33% |##################################################

        ```
        """
        width = as_int(width)

        # We convert various values herein to native ints and floats because number
        # tower implementations sometimes neglect to implement __format__ properly (or
        # at all). (I'm looking at you, sage.rings.…!)
        try:
            mu: RealLike = float(self.mean())
        except (OverflowError, TypeError):
            mu = self.mean()

        if width <= 0:

            def _parts() -> Iterator[str]:
                yield f"avg: {mu:.2f}"

                for (
                    outcome,
                    probability,
                ) in self.distribution(fill_items):
                    probability_f = float(probability)
                    yield f"{outcome}:{probability_f:7.2%}"

            return "{" + ", ".join(_parts()) + "}"
        else:
            w = width - 15

            def _lines() -> Iterator[str]:
                try:
                    yield f"avg | {mu:7.2f}"
                    std = float(self.stdev(mu))
                    var = float(self.variance(mu))
                    yield f"std | {std:7.2f}"
                    yield f"var | {var:7.2f}"
                except (OverflowError, TypeError) as exc:
                    warnings.warn(f"{str(exc)}; mu: {mu}")

                if self:
                    outcomes, probabilities = self.distribution_xy(fill_items)
                    tick_scale = max(probabilities) if scaled else 1.0

                    for outcome, probability in zip(outcomes, probabilities):
                        try:
                            outcome_str = f"{outcome: 3}"
                        except (TypeError, ValueError):
                            outcome_str = str(outcome)
                            outcome_str = f"{outcome_str: >3}"

                        ticks = tick * int(w * probability / tick_scale)
                        probability_f = float(probability)
                        yield f"{outcome_str} | {probability_f:7.2%} |{ticks}"

            return sep.join(_lines())

    @beartype
    def mean(self) -> RealLike:
        r"""
        Returns the mean of the weighted outcomes (or 0.0 if there are no outcomes).
        """
        numerator: float
        denominator: float
        numerator = denominator = 0

        for outcome, count in self.items():
            numerator += outcome * count
            denominator += count

        return numerator / (denominator or 1)

    @beartype
    def stdev(self, mu: Optional[RealLike] = None) -> RealLike:
        r"""
        Shorthand for ``#!python math.sqrt(self.variance(mu))``.
        """
        return sqrt(self.variance(mu))

    @beartype
    def variance(self, mu: Optional[RealLike] = None) -> RealLike:
        r"""
        Returns the variance of the weighted outcomes. If provided, *mu* is used as the mean
        (to avoid duplicate computation).
        """
        mu = mu if mu else self.mean()
        numerator: float
        denominator: float
        numerator = denominator = 0

        for outcome, count in self.items():
            numerator += (outcome - mu) ** 2 * count
            denominator += count

        return numerator / (denominator or 1)

    @beartype
    def roll(self) -> RealLike:
        r"""
        Returns a (weighted) random outcome.
        """
        return (
            rng.RNG.choices(
                population=tuple(self.outcomes()),
                weights=tuple(self.counts()),
                k=1,
            )[0]
            if self
            else 0
        )

    def _lowest_terms(self) -> Iterable[Tuple[RealLike, int]]:
        counts_gcd = gcd(*self.counts())

        return ((k, v // counts_gcd) for k, v in self.items())

__slots__: Union[str, Iterable[str]] special

total: int property readonly

Experimental

This property should be considered experimental and may change or disappear in future versions.

Equivalent to sum(self.counts()).
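A quick check, consistent with the definition above:

>>> H(6).total
6
>>> (3@H(6)).total
216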

__abs__(self) -> H special

Source code in dyce/h.py
@beartype
def __abs__(self) -> H:
    return self.umap(__abs__)
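For example (equality compares histograms reduced to lowest terms):

>>> abs(H(-6)) == H(6)
True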

__add__(self, other: _OperandT) -> H special

Source code in dyce/h.py
@beartype
def __add__(self, other: _OperandT) -> H:
    try:
        return self.map(__add__, other)
    except NotImplementedError:
        return NotImplemented

__and__(self, other: Union[SupportsInt, 'H', 'HableT']) -> H special

Source code in dyce/h.py
@beartype
def __and__(self, other: Union[SupportsInt, "H", "HableT"]) -> H:
    try:
        if isinstance(other, SupportsInt):
            other = as_int(other)

        return self.map(__and__, other)
    except (NotImplementedError, TypeError):
        return NotImplemented
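For example, masking each outcome's low bit yields a parity histogram:

>>> H(6) & 1
H({0: 3, 1: 3})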

__eq__(self, other) -> bool special

Source code in dyce/h.py
@beartype
def __eq__(self, other) -> bool:
    if isinstance(other, HableT):
        return __eq__(self, other.h())
    elif isinstance(other, H):
        return __eq__(self.lowest_terms()._h, other.lowest_terms()._h)
    else:
        return super().__eq__(other)

__floordiv__(self, other: _OperandT) -> H special

Source code in dyce/h.py
@beartype
def __floordiv__(self, other: _OperandT) -> H:
    try:
        return self.map(__floordiv__, other)
    except NotImplementedError:
        return NotImplemented

__getitem__(self, key: RealLike) -> int special

Source code in dyce/h.py
@beartype
def __getitem__(self, key: RealLike) -> int:
    return __getitem__(self._h, key)

__hash__(self) -> int special

Return hash(self).

Source code in dyce/h.py
@beartype
def __hash__(self) -> int:
    return hash(frozenset(self._lowest_terms()))
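Because the hash is computed over counts reduced to lowest terms, equivalent histograms hash identically:

>>> hash(H(6)) == hash(H(6).accumulate(H(6)))
True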

__init__(self, items: _SourceT) -> None special

Initializer.

Source code in dyce/h.py
@beartype
def __init__(self, items: _SourceT) -> None:
    r"Initializer."
    super().__init__()
    self._simple_init: Optional[int] = None
    tmp: Counter[RealLike] = counter()

    if isinstance(items, MappingC):
        items = items.items()

    if isinstance(items, SupportsInt):
        if items != 0:
            self._simple_init = as_int(items)
            outcome_range = range(
                self._simple_init,
                0,
                1 if self._simple_init < 0 else -1,  # count toward zero
            )

            if isinstance(items, RealLike):
                outcome_type = type(items)
                tmp.update({outcome_type(i): 1 for i in outcome_range})
            else:
                tmp.update({i: 1 for i in outcome_range})
    elif isinstance(items, HableT):
        tmp.update(items.h())
    elif isinstance(items, IterableC):
        # items is either an Iterable[RealLike] or an Iterable[Tuple[RealLike,
        # SupportsInt]] (although this technically supports Iterable[Union[RealLike,
        # Tuple[RealLike, SupportsInt]]])
        for item in items:
            if isinstance(item, tuple):
                outcome, count = item
                tmp[outcome] += as_int(count)
            else:
                tmp[item] += 1
    else:
        raise ValueError(f"unrecognized initializer {items}")

    # Sort and omit zero counts. As of Python 3.7, insertion order of keys is
    # preserved.
    self._h: _MappingT = {
        outcome: tmp[outcome]
        for outcome in sorted_outcomes(tmp)
        if tmp[outcome] != 0
    }
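As the final step above implies, outcomes are stored sorted and zero counts are omitted:

>>> H({3: 1, 1: 2, 2: 0})
H({1: 2, 3: 1})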

__invert__(self) -> H special

Source code in dyce/h.py
@beartype
def __invert__(self) -> H:
    return self.umap(__invert__)

__iter__(self) -> Iterator[RealLike] special

Source code in dyce/h.py
@beartype
def __iter__(self) -> Iterator[RealLike]:
    return iter(self._h)

__len__(self) -> int special

Source code in dyce/h.py
@beartype
def __len__(self) -> int:
    return len(self._h)

__matmul__(self, other: SupportsInt) -> H special

Source code in dyce/h.py
@beartype
def __matmul__(self, other: SupportsInt) -> H:
    try:
        other = as_int(other)
    except TypeError:
        return NotImplemented

    if other < 0:
        raise ValueError("argument cannot be negative")
    else:
        return sum_h(repeat(self, other))

__mod__(self, other: _OperandT) -> H special

Source code in dyce/h.py
@beartype
def __mod__(self, other: _OperandT) -> H:
    try:
        return self.map(__mod__, other)
    except NotImplementedError:
        return NotImplemented

__mul__(self, other: _OperandT) -> H special

Source code in dyce/h.py
@beartype
def __mul__(self, other: _OperandT) -> H:
    try:
        return self.map(__mul__, other)
    except NotImplementedError:
        return NotImplemented

__ne__(self, other) -> bool special

Source code in dyce/h.py
@beartype
def __ne__(self, other) -> bool:
    if isinstance(other, HableT):
        return __ne__(self, other.h())
    elif isinstance(other, H):
        return not __eq__(self, other)
    else:
        return super().__ne__(other)

__neg__(self) -> H special

Source code in dyce/h.py
@beartype
def __neg__(self) -> H:
    return self.umap(__neg__)

__or__(self, other: Union[SupportsInt, 'H', 'HableT']) -> H special

Source code in dyce/h.py
@beartype
def __or__(self, other: Union[SupportsInt, "H", "HableT"]) -> H:
    try:
        if isinstance(other, SupportsInt):
            other = as_int(other)

        return self.map(__or__, other)
    except (NotImplementedError, TypeError):
        return NotImplemented

__pos__(self) -> H special

Source code in dyce/h.py
@beartype
def __pos__(self) -> H:
    return self.umap(__pos__)

__pow__(self, other: _OperandT) -> H special

Source code in dyce/h.py
@beartype
def __pow__(self, other: _OperandT) -> H:
    try:
        return self.map(__pow__, other)
    except NotImplementedError:
        return NotImplemented

__radd__(self, other: RealLike) -> H special

Source code in dyce/h.py
@beartype
def __radd__(self, other: RealLike) -> H:
    try:
        return self.rmap(other, __add__)
    except NotImplementedError:
        return NotImplemented

__rand__(self, other: SupportsInt) -> H special

Source code in dyce/h.py
@beartype
def __rand__(self, other: SupportsInt) -> H:
    try:
        return self.rmap(as_int(other), __and__)
    except (NotImplementedError, TypeError):
        return NotImplemented

__repr__(self) -> str special

Source code in dyce/h.py
@beartype
def __repr__(self) -> str:
    if self._simple_init is not None:
        arg = str(self._simple_init)
    elif sys.version_info >= (3, 8):
        arg = pformat(self._h, sort_dicts=False)
    else:
        arg = dict.__repr__(self._h)

    return f"{type(self).__name__}({arg})"
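The shorthand integer form is preserved where the histogram was created from one:

>>> H(6)
H(6)
>>> H({1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1})
H({1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1})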

__rfloordiv__(self, other: RealLike) -> H special

Source code in dyce/h.py
@beartype
def __rfloordiv__(self, other: RealLike) -> H:
    try:
        return self.rmap(other, __floordiv__)
    except NotImplementedError:
        return NotImplemented

__rmatmul__(self, other: SupportsInt) -> H special

Source code in dyce/h.py
@beartype
def __rmatmul__(self, other: SupportsInt) -> H:
    return self.__matmul__(other)

__rmod__(self, other: RealLike) -> H special

Source code in dyce/h.py
@beartype
def __rmod__(self, other: RealLike) -> H:
    try:
        return self.rmap(other, __mod__)
    except NotImplementedError:
        return NotImplemented

__rmul__(self, other: RealLike) -> H special

Source code in dyce/h.py
@beartype
def __rmul__(self, other: RealLike) -> H:
    try:
        return self.rmap(other, __mul__)
    except NotImplementedError:
        return NotImplemented

__ror__(self, other: SupportsInt) -> H special

Source code in dyce/h.py
@beartype
def __ror__(self, other: SupportsInt) -> H:
    try:
        return self.rmap(as_int(other), __or__)
    except (NotImplementedError, TypeError):
        return NotImplemented

__rpow__(self, other: RealLike) -> H special

Source code in dyce/h.py
@beartype
def __rpow__(self, other: RealLike) -> H:
    try:
        return self.rmap(other, __pow__)
    except NotImplementedError:
        return NotImplemented

__rsub__(self, other: RealLike) -> H special

Source code in dyce/h.py
@beartype
def __rsub__(self, other: RealLike) -> H:
    try:
        return self.rmap(other, __sub__)
    except NotImplementedError:
        return NotImplemented
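For example, with a number as the left operand:

>>> 7 - H(6)
H({1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1})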

__rtruediv__(self, other: RealLike) -> H special

Source code in dyce/h.py
@beartype
def __rtruediv__(self, other: RealLike) -> H:
    try:
        return self.rmap(other, __truediv__)
    except NotImplementedError:
        return NotImplemented

__rxor__(self, other: SupportsInt) -> H special

Source code in dyce/h.py
@beartype
def __rxor__(self, other: SupportsInt) -> H:
    try:
        return self.rmap(as_int(other), __xor__)
    except (NotImplementedError, TypeError):
        return NotImplemented

__sub__(self, other: _OperandT) -> H special

Source code in dyce/h.py
@beartype
def __sub__(self, other: _OperandT) -> H:
    try:
        return self.map(__sub__, other)
    except NotImplementedError:
        return NotImplemented

__truediv__(self, other: _OperandT) -> H special

Source code in dyce/h.py
@beartype
def __truediv__(self, other: _OperandT) -> H:
    try:
        return self.map(__truediv__, other)
    except NotImplementedError:
        return NotImplemented

__xor__(self, other: Union[SupportsInt, 'H', 'HableT']) -> H special

Source code in dyce/h.py
@beartype
def __xor__(self, other: Union[SupportsInt, "H", "HableT"]) -> H:
    try:
        if isinstance(other, SupportsInt):
            other = as_int(other)

        return self.map(__xor__, other)
    except NotImplementedError:
        return NotImplemented

accumulate(self, other: _SourceT) -> H

Accumulates counts.

>>> H(4).accumulate(H(6))
H({1: 2, 2: 2, 3: 2, 4: 2, 5: 1, 6: 1})
Source code in dyce/h.py
@beartype
def accumulate(self, other: _SourceT) -> H:
    r"""
    Accumulates counts.

    ``` python
    >>> H(4).accumulate(H(6))
    H({1: 2, 2: 2, 3: 2, 4: 2, 5: 1, 6: 1})

    ```
    """
    if isinstance(other, MappingC):
        other = other.items()
    elif not isinstance(other, IterableC):
        other = cast(Iterable[RealLike], (other,))

    return type(self)(chain(self.items(), cast(Iterable, other)))

counts(self) -> ValuesView[int]

More descriptive synonym for the values method.

Source code in dyce/h.py
@beartype
def counts(self) -> ValuesView[int]:
    r"""
    More descriptive synonym for the [``values`` method][dyce.h.H.values].
    """
    return self._h.values()

distribution(self, fill_items: Optional[_MappingT] = None, rational_t: _RationalInitializerT[_T] = Fraction) -> Iterator[Tuple[RealLike, _T]]

Presentation helper function returning an iterator over outcome/count or outcome/probability pairs.

>>> h = H((1, 2, 3, 3, 4, 4, 5, 6))
>>> list(h.distribution())
[(1, Fraction(1, 8)), (2, Fraction(1, 8)), (3, Fraction(1, 4)), (4, Fraction(1, 4)), (5, Fraction(1, 8)), (6, Fraction(1, 8))]
>>> list(h.ge(3).distribution())
[(False, Fraction(1, 4)), (True, Fraction(3, 4))]

If provided, fill_items supplies defaults for any “missing” outcomes.

>>> list(h.distribution())
[(1, Fraction(1, 8)), (2, Fraction(1, 8)), (3, Fraction(1, 4)), (4, Fraction(1, 4)), (5, Fraction(1, 8)), (6, Fraction(1, 8))]
>>> list(h.distribution(fill_items={0: 0, 7: 0}))
[(0, Fraction(0, 1)), (1, Fraction(1, 8)), (2, Fraction(1, 8)), (3, Fraction(1, 4)), (4, Fraction(1, 4)), (5, Fraction(1, 8)), (6, Fraction(1, 8)), (7, Fraction(0, 1))]

Experimental

The rational_t argument to this method should be considered experimental and may change or disappear in future versions.

If provided, rational_t must be a callable that takes two ints (a numerator and denominator) and returns an instance of a desired (but otherwise arbitrary) type.

>>> list(h.distribution(rational_t=lambda n, d: f"{n}/{d}"))
[(1, '1/8'), (2, '1/8'), (3, '2/8'), (4, '2/8'), (5, '1/8'), (6, '1/8')]
>>> import sympy
>>> list(h.distribution(rational_t=sympy.Rational))
[(1, 1/8), (2, 1/8), (3, 1/4), (4, 1/4), (5, 1/8), (6, 1/8)]
>>> import sage.rings.rational  # doctest: +SKIP
>>> list(h.distribution(rational_t=lambda n, d: sage.rings.rational.Rational((n, d))))  # doctest: +SKIP
[(1, 1/8), (2, 1/8), (3, 1/4), (4, 1/4), (5, 1/8), (6, 1/8)]

Note

The arguments passed to rational_t are not reduced to the lowest terms.

The rational_t argument is a convenience. Iteration or comprehension can be used to accomplish something similar.

>>> [(outcome, f"{probability.numerator}/{probability.denominator}") for outcome, probability in (h).distribution()]
[(1, '1/8'), (2, '1/8'), (3, '1/4'), (4, '1/4'), (5, '1/8'), (6, '1/8')]

Many number implementations can convert directly from fractions.Fraction objects.

>>> import sympy.abc
>>> [(outcome, sympy.Rational(probability)) for outcome, probability in (h + sympy.abc.x).distribution()]
[(x + 1, 1/8), (x + 2, 1/8), (x + 3, 1/4), (x + 4, 1/4), (x + 5, 1/8), (x + 6, 1/8)]
>>> import sage.rings.rational  # doctest: +SKIP
>>> [(outcome, sage.rings.rational.Rational(probability)) for outcome, probability in h.distribution()]  # doctest: +SKIP
[(1, 1/8), (2, 1/8), (3, 1/4), (4, 1/4), (5, 1/8), (6, 1/8)]
Source code in dyce/h.py
@experimental
@beartype
def distribution(
    self,
    fill_items: Optional[_MappingT] = None,
    # TODO(posita): See <https://github.com/python/mypy/issues/10854> for context on
    # all the @overload work-around nonsense above and remove those once that issue
    # is addressed.
    rational_t: _RationalInitializerT[_T] = cast(_RationalInitializerT, Fraction),
) -> Iterator[Tuple[RealLike, _T]]:
    r"""
    Presentation helper function returning an iterator over outcome/count or
    outcome/probability pairs.

    ``` python
    >>> h = H((1, 2, 3, 3, 4, 4, 5, 6))
    >>> list(h.distribution())
    [(1, Fraction(1, 8)), (2, Fraction(1, 8)), (3, Fraction(1, 4)), (4, Fraction(1, 4)), (5, Fraction(1, 8)), (6, Fraction(1, 8))]
    >>> list(h.ge(3).distribution())
    [(False, Fraction(1, 4)), (True, Fraction(3, 4))]

    ```

    If provided, *fill_items* supplies defaults for any “missing” outcomes.

    ``` python
    >>> list(h.distribution())
    [(1, Fraction(1, 8)), (2, Fraction(1, 8)), (3, Fraction(1, 4)), (4, Fraction(1, 4)), (5, Fraction(1, 8)), (6, Fraction(1, 8))]
    >>> list(h.distribution(fill_items={0: 0, 7: 0}))
    [(0, Fraction(0, 1)), (1, Fraction(1, 8)), (2, Fraction(1, 8)), (3, Fraction(1, 4)), (4, Fraction(1, 4)), (5, Fraction(1, 8)), (6, Fraction(1, 8)), (7, Fraction(0, 1))]

    ```

    !!! warning "Experimental"

        The *rational_t* argument to this method should be considered experimental
        and may change or disappear in future versions.

    If provided, *rational_t* must be a callable that takes two ``#!python int``s (a
    numerator and denominator) and returns an instance of a desired (but otherwise
    arbitrary) type.

    ``` python
    >>> list(h.distribution(rational_t=lambda n, d: f"{n}/{d}"))
    [(1, '1/8'), (2, '1/8'), (3, '2/8'), (4, '2/8'), (5, '1/8'), (6, '1/8')]

    ```

    ``` python
    >>> import sympy
    >>> list(h.distribution(rational_t=sympy.Rational))
    [(1, 1/8), (2, 1/8), (3, 1/4), (4, 1/4), (5, 1/8), (6, 1/8)]

    ```

    ``` python
    >>> import sage.rings.rational  # doctest: +SKIP
    >>> list(h.distribution(rational_t=lambda n, d: sage.rings.rational.Rational((n, d))))  # doctest: +SKIP
    [(1, 1/8), (2, 1/8), (3, 1/4), (4, 1/4), (5, 1/8), (6, 1/8)]

    ```

    !!! note

        The arguments passed to *rational_t* are not reduced to the lowest terms.

    The *rational_t* argument is a convenience. Iteration or comprehension can be
    used to accomplish something similar.

    ``` python
    >>> [(outcome, f"{probability.numerator}/{probability.denominator}") for outcome, probability in (h).distribution()]
    [(1, '1/8'), (2, '1/8'), (3, '1/4'), (4, '1/4'), (5, '1/8'), (6, '1/8')]

    ```

    Many number implementations can convert directly from ``#!python
    fractions.Fraction``s.

    ``` python
    >>> import sympy.abc
    >>> [(outcome, sympy.Rational(probability)) for outcome, probability in (h + sympy.abc.x).distribution()]
    [(x + 1, 1/8), (x + 2, 1/8), (x + 3, 1/4), (x + 4, 1/4), (x + 5, 1/8), (x + 6, 1/8)]

    ```

    ``` python
    >>> import sage.rings.rational  # doctest: +SKIP
    >>> [(outcome, sage.rings.rational.Rational(probability)) for outcome, probability in h.distribution()]  # doctest: +SKIP
    [(1, 1/8), (2, 1/8), (3, 1/4), (4, 1/4), (5, 1/8), (6, 1/8)]

    ```
    """
    if fill_items is None:
        fill_items = {}

    combined = dict(chain(fill_items.items(), self.items()))
    total = sum(combined.values()) or 1

    return (
        (outcome, rational_t(combined[outcome], total))
        for outcome in sorted_outcomes(combined)
    )

distribution_xy(self, fill_items: Optional[_MappingT] = None) -> Tuple[Tuple[RealLike, ...], Tuple[float, ...]]

Presentation helper function returning a “zipped” arrangement of the output from the distribution method, ensuring the values are floats.

>>> list(H(6).distribution())
[(1, Fraction(1, 6)), (2, Fraction(1, 6)), (3, Fraction(1, 6)), (4, Fraction(1, 6)), (5, Fraction(1, 6)), (6, Fraction(1, 6))]
>>> H(6).distribution_xy()
((1, 2, 3, 4, 5, 6), (0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666))
Source code in dyce/h.py
@beartype
def distribution_xy(
    self,
    fill_items: Optional[_MappingT] = None,
) -> Tuple[Tuple[RealLike, ...], Tuple[float, ...]]:
    r"""
    Presentation helper function returning a “zipped” arrangement of the output from
    the [``distribution`` method][dyce.h.H.distribution], ensuring the values are
    ``#!python float``s.

    ``` python
    >>> list(H(6).distribution())
    [(1, Fraction(1, 6)), (2, Fraction(1, 6)), (3, Fraction(1, 6)), (4, Fraction(1, 6)), (5, Fraction(1, 6)), (6, Fraction(1, 6))]
    >>> H(6).distribution_xy()
    ((1, 2, 3, 4, 5, 6), (0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666))

    ```
    """
    # TODO(posita): See <https://github.com/python/typing/issues/193>
    return tuple(  # type: ignore [return-value]
        zip(
            *(
                (outcome, float(probability))
                for outcome, probability in self.distribution(fill_items)
            )
        )
    )

eq(self, other: _OperandT) -> H

Shorthand for self.map(operator.__eq__, other).umap(bool).

>>> H(6).eq(3)
H({False: 5, True: 1})

See the map and umap methods.

Source code in dyce/h.py
@beartype
def eq(self, other: _OperandT) -> H:
    r"""
    Shorthand for ``#!python self.map(operator.__eq__, other).umap(bool)``.

    ``` python
    >>> H(6).eq(3)
    H({False: 5, True: 1})

    ```

    See the [``map``][dyce.h.H.map] and [``umap``][dyce.h.H.umap] methods.
    """
    return self.map(__eq__, other).umap(bool)

exactly_k_times_in_n(self, outcome: RealLike, n: SupportsInt, k: SupportsInt) -> int

Experimental

This method should be considered experimental and may change or disappear in future versions.

Computes and returns the number of ways outcome can appear exactly k times among n@self.

>>> H(6).exactly_k_times_in_n(outcome=5, n=4, k=2)
150
>>> H((2, 3, 3, 4, 4, 5)).exactly_k_times_in_n(outcome=2, n=3, k=3)
1
>>> H((2, 3, 3, 4, 4, 5)).exactly_k_times_in_n(outcome=4, n=3, k=3)
8
Source code in dyce/h.py
@experimental
@beartype
def exactly_k_times_in_n(
    self,
    outcome: RealLike,
    n: SupportsInt,
    k: SupportsInt,
) -> int:
    r"""
    !!! warning "Experimental"

        This method should be considered experimental and may change or disappear in
        future versions.

    Computes and returns the number of ways *outcome* can appear exactly *k* times
    among ``#!python n@self``.

    ``` python
    >>> H(6).exactly_k_times_in_n(outcome=5, n=4, k=2)
    150
    >>> H((2, 3, 3, 4, 4, 5)).exactly_k_times_in_n(outcome=2, n=3, k=3)
    1
    >>> H((2, 3, 3, 4, 4, 5)).exactly_k_times_in_n(outcome=4, n=3, k=3)
    8

    ```
    """
    n = as_int(n)
    k = as_int(k)
    assert k <= n
    c_outcome = self.get(outcome, 0)

    return comb(n, k) * c_outcome ** k * (self.total - c_outcome) ** (n - k)
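The count follows the binomial pattern in the return expression above. For the first example, there are comb(4, 2) ways to choose which two of the four dice show a 5, and each remaining die can show any of the other five faces:

>>> from math import comb
>>> comb(4, 2) * 1**2 * (6 - 1)**(4 - 2)
150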

explode(self, max_depth: SupportsInt = 1, precision_limit: Fraction = Fraction(0, 1)) -> H

Shorthand for self.substitute(lambda h, outcome: outcome if len(h) == 1 else h if outcome == max(h) else outcome, operator.__add__, max_depth, precision_limit).

>>> H(6).explode(max_depth=2)
H({1: 36, 2: 36, 3: 36, 4: 36, 5: 36, 7: 6, 8: 6, 9: 6, 10: 6, 11: 6, 13: 1, 14: 1, 15: 1, 16: 1, 17: 1, 18: 1})

See the substitute method.
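As a sanity check, two levels of explosion on a d6 partition all 6 ** 3 = 216 equally weighted sequences:

>>> H(6).explode(max_depth=2).total
216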

Source code in dyce/h.py
@beartype
def explode(
    self,
    max_depth: SupportsInt = 1,
    precision_limit: Fraction = Fraction(0),
) -> H:
    r"""
    Shorthand for ``#!python self.substitute(lambda h, outcome: outcome if len(h) == 1
    else h if outcome == max(h) else outcome, operator.__add__, max_depth,
    precision_limit)``.

    ``` python
    >>> H(6).explode(max_depth=2)
    H({1: 36, 2: 36, 3: 36, 4: 36, 5: 36, 7: 6, 8: 6, 9: 6, 10: 6, 11: 6, 13: 1, 14: 1, 15: 1, 16: 1, 17: 1, 18: 1})

    ```

    See the [``substitute`` method][dyce.h.H.substitute].
    """
    return self.substitute(
        lambda h, outcome: outcome
        if len(h) == 1
        else h
        if outcome == max(h)
        else outcome,
        __add__,
        max_depth,
        precision_limit,
    )

foreach(dependent_term: Callable[..., Union['H', RealLike]], **independent_sources: _SourceT) -> H classmethod

Experimental

This method should be considered experimental and may change or disappear in future versions.

Calls dependent_term for each set of outcomes from the product of independent_sources and accumulates the results. This is useful for resolving dependent probabilities. Returned histograms are always reduced to their lowest terms.

For example, rolling a d20, re-rolling a 1 if it comes up, and keeping the result might be expressed as[1]:

>>> d20 = H(20)

>>> def reroll_one_dependent_term(d20_outcome):
...   if d20_outcome == 1:
...     return d20
...   else:
...     return d20_outcome

>>> H.foreach(reroll_one_dependent_term, d20_outcome=d20)
H({1: 1,
 2: 21,
 3: 21,
 ...,
 19: 21,
 20: 21})

The foreach class method merely wraps dependent_term and calls P.foreach. In doing so, it imposes a very modest overhead that is negligible in most cases.

%timeit P.foreach(dependent_term_p, roll_1=H(6), roll_2=H(8), roll_3=H(10), roll_n=H(20))
219 ms ± 5.58 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

%timeit H.foreach(dependent_term_h, outcome_1=H(6), outcome_2=H(8), outcome_3=H(10), outcome_n=H(20))
230 ms ± 6.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Source: perf_foreach.ipy
from dyce import H, P

def dependent_term(
  val_1,
  val_2,
  val_3,
  val_n,
):
  import math ; math.gcd(456**123, 123**456)  # emulate an expensive calculation
  return (
    (val_1 == val_2) +
    (val_2 == val_3) +
    (val_1 == val_3) +
    (
      val_n > val_1
      and val_n > val_2
      and val_n > val_3
    )
  )

def dependent_term_h(
  outcome_1,
  outcome_2,
  outcome_3,
  outcome_n,
):
  return dependent_term(outcome_1, outcome_2, outcome_3, outcome_n)

def dependent_term_p(
  roll_1,
  roll_2,
  roll_3,
  roll_n,
):
  return dependent_term(roll_1, roll_2, roll_3, roll_n)

source_1 = H(6)
source_2 = H(8)
source_3 = H(10)
source_n = H(20)

print(f"%timeit P.foreach({dependent_term_p.__name__}, roll_1={source_1}, roll_2={source_2}, roll_3={source_3}, roll_n={source_n})")
%timeit P.foreach(dependent_term_p, roll_1=source_1, roll_2=source_2, roll_3=source_3, roll_n=source_n)
print()

print(f"%timeit H.foreach({dependent_term_h.__name__}, outcome_1={source_1}, outcome_2={source_2}, outcome_3={source_3}, outcome_n={source_n})")
%timeit H.foreach(dependent_term_h, outcome_1=source_1, outcome_2=source_2, outcome_3=source_3, outcome_n=source_n)
print()

  1. This is primarily for illustration. H.substitute is often better suited to cases involving re-rolling a single independent term such as this one. 

Source code in dyce/h.py
@classmethod
@beartype
def foreach(
    cls,
    dependent_term: Callable[..., Union["H", RealLike]],
    **independent_sources: _SourceT,
) -> H:
    r"""
    !!! warning "Experimental"

        This method should be considered experimental and may change or disappear in
        future versions.

    Calls ``#!python dependent_term`` for each set of outcomes from the product of
    ``independent_sources`` and accumulates the results. This is useful for
    resolving dependent probabilities. Returned histograms are always reduced to
    their lowest terms.

    For example rolling a d20, re-rolling a ``#!python 1`` if it comes up, and
    keeping the result might be expressed as[^1]:

    [^1]:

        This is primarily for illustration. [``H.substitute``][dyce.h.H.substitute]
        is often better suited to cases involving re-rolling a single independent
        term such as this one.

    ``` python
    >>> d20 = H(20)

    >>> def reroll_one_dependent_term(d20_outcome):
    ...   if d20_outcome == 1:
    ...     return d20
    ...   else:
    ...     return d20_outcome

    >>> H.foreach(reroll_one_dependent_term, d20_outcome=d20)
    H({1: 1,
     2: 21,
     3: 21,
     ...,
     19: 21,
     20: 21})

    ```

    The ``#!python foreach`` class method merely wraps *dependent_term* and calls
    [``P.foreach``][dyce.p.P.foreach]. In doing so, it imposes a very modest
    overhead that is negligible in most cases.

    ``` python
    --8<-- "docs/assets/perf_foreach.txt"
    ```

    <details>
    <summary>Source: <a href="https://github.com/posita/dyce/blob/latest/docs/assets/perf_foreach.ipy"><code>perf_foreach.ipy</code></a></summary>

    ``` python
    --8<-- "docs/assets/perf_foreach.ipy"
    ```
    </details>
    """
    from dyce import P

    def _dependent_term(**roll_kw):
        outcome_kw: Dict[str, RealLike] = {}

        for key, roll in roll_kw.items():
            assert isinstance(roll, tuple)
            assert len(roll) == 1
            outcome_kw[key] = roll[0]

        return dependent_term(**outcome_kw)

    return P.foreach(_dependent_term, **independent_sources)

format(self, fill_items: Optional[_MappingT] = None, width: SupportsInt = 65, scaled: bool = False, tick: str = '#', sep: str = '\n') -> str

Returns a formatted string representation of the histogram. If provided, fill_items supplies defaults for any missing outcomes. If width is greater than zero, a horizontal bar ASCII graph is printed using tick and sep (which are otherwise ignored if width is zero or less).

>>> print(H(6).format(width=0))
{avg: 3.50, 1: 16.67%, 2: 16.67%, 3: 16.67%, 4: 16.67%, 5: 16.67%, 6: 16.67%}
>>> print((2@H(6)).format(fill_items={i: 0 for i in range(1, 21)}, tick="@"))
avg |    7.00
std |    2.42
var |    5.83
  1 |   0.00% |
  2 |   2.78% |@
  3 |   5.56% |@@
  4 |   8.33% |@@@@
  5 |  11.11% |@@@@@
  6 |  13.89% |@@@@@@
  7 |  16.67% |@@@@@@@@
  8 |  13.89% |@@@@@@
  9 |  11.11% |@@@@@
 10 |   8.33% |@@@@
 11 |   5.56% |@@
 12 |   2.78% |@
 13 |   0.00% |
 14 |   0.00% |
 15 |   0.00% |
 16 |   0.00% |
 17 |   0.00% |
 18 |   0.00% |
 19 |   0.00% |
 20 |   0.00% |

If scaled is True, horizontal bars are scaled to width.

>>> h = (2@H(6)).ge(7)
>>> print(f"{' 65 chars wide -->|':->65}")
---------------------------------------------- 65 chars wide -->|
>>> print(h.format(scaled=False))
avg |    0.58
std |    0.49
var |    0.24
  0 |  41.67% |####################
  1 |  58.33% |#############################
>>> print(h.format(scaled=True))
avg |    0.58
std |    0.49
var |    0.24
  0 |  41.67% |###################################
  1 |  58.33% |##################################################
Source code in dyce/h.py
@beartype
def format(
    self,
    fill_items: Optional[_MappingT] = None,
    width: SupportsInt = _ROW_WIDTH,
    scaled: bool = False,
    tick: str = "#",
    sep: str = os.linesep,
) -> str:
    r"""
    Returns a formatted string representation of the histogram. If provided,
    *fill_items* supplies defaults for any missing outcomes. If *width* is greater
    than zero, a horizontal bar ASCII graph is printed using *tick* and *sep* (which
    are otherwise ignored if *width* is zero or less).

    ``` python
    >>> print(H(6).format(width=0))
    {avg: 3.50, 1: 16.67%, 2: 16.67%, 3: 16.67%, 4: 16.67%, 5: 16.67%, 6: 16.67%}

    ```

    ``` python
    >>> print((2@H(6)).format(fill_items={i: 0 for i in range(1, 21)}, tick="@"))
    avg |    7.00
    std |    2.42
    var |    5.83
      1 |   0.00% |
      2 |   2.78% |@
      3 |   5.56% |@@
      4 |   8.33% |@@@@
      5 |  11.11% |@@@@@
      6 |  13.89% |@@@@@@
      7 |  16.67% |@@@@@@@@
      8 |  13.89% |@@@@@@
      9 |  11.11% |@@@@@
     10 |   8.33% |@@@@
     11 |   5.56% |@@
     12 |   2.78% |@
     13 |   0.00% |
     14 |   0.00% |
     15 |   0.00% |
     16 |   0.00% |
     17 |   0.00% |
     18 |   0.00% |
     19 |   0.00% |
     20 |   0.00% |

    ```

    If *scaled* is ``#!python True``, horizontal bars are scaled to *width*.

    ``` python
    >>> h = (2@H(6)).ge(7)
    >>> print(f"{' 65 chars wide -->|':->65}")
    ---------------------------------------------- 65 chars wide -->|
    >>> print(h.format(scaled=False))
    avg |    0.58
    std |    0.49
    var |    0.24
      0 |  41.67% |####################
      1 |  58.33% |#############################
    >>> print(h.format(scaled=True))
    avg |    0.58
    std |    0.49
    var |    0.24
      0 |  41.67% |###################################
      1 |  58.33% |##################################################

    ```
    """
    width = as_int(width)

    # We convert various values herein to native ints and floats because number
    # tower implementations sometimes neglect to implement __format__ properly (or
    # at all). (I'm looking at you, sage.rings.…!)
    try:
        mu: RealLike = float(self.mean())
    except (OverflowError, TypeError):
        mu = self.mean()

    if width <= 0:

        def _parts() -> Iterator[str]:
            yield f"avg: {mu:.2f}"

            for (
                outcome,
                probability,
            ) in self.distribution(fill_items):
                probability_f = float(probability)
                yield f"{outcome}:{probability_f:7.2%}"

        return "{" + ", ".join(_parts()) + "}"
    else:
        w = width - 15

        def _lines() -> Iterator[str]:
            try:
                yield f"avg | {mu:7.2f}"
                std = float(self.stdev(mu))
                var = float(self.variance(mu))
                yield f"std | {std:7.2f}"
                yield f"var | {var:7.2f}"
            except (OverflowError, TypeError) as exc:
                warnings.warn(f"{str(exc)}; mu: {mu}")

            if self:
                outcomes, probabilities = self.distribution_xy(fill_items)
                tick_scale = max(probabilities) if scaled else 1.0

                for outcome, probability in zip(outcomes, probabilities):
                    try:
                        outcome_str = f"{outcome: 3}"
                    except (TypeError, ValueError):
                        outcome_str = str(outcome)
                        outcome_str = f"{outcome_str: >3}"

                    ticks = tick * int(w * probability / tick_scale)
                    probability_f = float(probability)
                    yield f"{outcome_str} | {probability_f:7.2%} |{ticks}"

        return sep.join(_lines())

ge(self, other: _OperandT) -> H

Shorthand for self.map(operator.__ge__, other).umap(bool).

>>> H(6).ge(3)
H({False: 2, True: 4})

See the map and umap methods.

Source code in dyce/h.py
@beartype
def ge(self, other: _OperandT) -> H:
    r"""
    Shorthand for ``#!python self.map(operator.__ge__, other).umap(bool)``.

    ``` python
    >>> H(6).ge(3)
    H({False: 2, True: 4})

    ```

    See the [``map``][dyce.h.H.map] and [``umap``][dyce.h.H.umap] methods.
    """
    return self.map(__ge__, other).umap(bool)

gt(self, other: _OperandT) -> H

Shorthand for self.map(operator.__gt__, other).umap(bool).

>>> H(6).gt(3)
H({False: 3, True: 3})

See the map and umap methods.

Source code in dyce/h.py
@beartype
def gt(self, other: _OperandT) -> H:
    r"""
    Shorthand for ``#!python self.map(operator.__gt__, other).umap(bool)``.

    ``` python
    >>> H(6).gt(3)
    H({False: 3, True: 3})

    ```

    See the [``map``][dyce.h.H.map] and [``umap``][dyce.h.H.umap] methods.
    """
    return self.map(__gt__, other).umap(bool)

is_even(self) -> H

Equivalent to self.umap(dyce.types.is_even).

>>> H((-4, -2, 0, 1, 2, 3)).is_even()
H({False: 2, True: 4})

See the umap method.

Source code in dyce/h.py
@beartype
def is_even(self) -> H:
    r"""
    Equivalent to ``#!python self.umap(dyce.types.is_even)``.

    ``` python
    >>> H((-4, -2, 0, 1, 2, 3)).is_even()
    H({False: 2, True: 4})

    ```

    See the [``umap`` method][dyce.h.H.umap].
    """
    return self.umap(is_even)

is_odd(self) -> H

Equivalent to self.umap(dyce.types.is_odd).

>>> H((-4, -2, 0, 1, 2, 3)).is_odd()
H({False: 4, True: 2})

See the umap method.

Source code in dyce/h.py
@beartype
def is_odd(self) -> H:
    r"""
    Equivalent to ``#!python self.umap(dyce.types.is_odd)``.

    ``` python
    >>> H((-4, -2, 0, 1, 2, 3)).is_odd()
    H({False: 4, True: 2})

    ```

    See the [``umap`` method][dyce.h.H.umap].
    """
    return self.umap(is_odd)

items(self) -> ItemsView[RealLike, int]

D.items() -> a set-like object providing a view on D's items

Source code in dyce/h.py
@beartype
def items(self) -> ItemsView[RealLike, int]:
    return self._h.items()

keys(self) -> KeysView[RealLike]

D.keys() -> a set-like object providing a view on D's keys

Source code in dyce/h.py
@beartype
def keys(self) -> KeysView[RealLike]:
    return self.outcomes()

le(self, other: _OperandT) -> H

Shorthand for self.map(operator.__le__, other).umap(bool).

>>> H(6).le(3)
H({False: 3, True: 3})

See the map and umap methods.

Source code in dyce/h.py
@beartype
def le(self, other: _OperandT) -> H:
    r"""
    Shorthand for ``#!python self.map(operator.__le__, other).umap(bool)``.

    ``` python
    >>> H(6).le(3)
    H({False: 3, True: 3})

    ```

    See the [``map``][dyce.h.H.map] and [``umap``][dyce.h.H.umap] methods.
    """
    return self.map(__le__, other).umap(bool)

lowest_terms(self) -> H

Computes and returns a histogram whose counts share a greatest common divisor of 1.

>>> df = H((-1, -1, 0, 0, 1, 1)) ; df
H({-1: 2, 0: 2, 1: 2})
>>> df.lowest_terms()
H({-1: 1, 0: 1, 1: 1})
>>> d6avg = H((2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 5, 5)) ; d6avg
H({2: 2, 3: 4, 4: 4, 5: 2})
>>> d6avg.lowest_terms()
H({2: 1, 3: 2, 4: 2, 5: 1})
Source code in dyce/h.py
@beartype
def lowest_terms(self) -> H:
    r"""
    Computes and returns a histogram whose counts share a greatest common divisor of 1.

    ``` python
    >>> df = H((-1, -1, 0, 0, 1, 1)) ; df
    H({-1: 2, 0: 2, 1: 2})
    >>> df.lowest_terms()
    H({-1: 1, 0: 1, 1: 1})

    ```

    ``` python
    >>> d6avg = H((2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 5, 5)) ; d6avg
    H({2: 2, 3: 4, 4: 4, 5: 2})
    >>> d6avg.lowest_terms()
    H({2: 1, 3: 2, 4: 2, 5: 1})

    ```
    """
    return type(self)(self._lowest_terms())

lt(self, other: _OperandT) -> H

Shorthand for self.map(operator.__lt__, other).umap(bool).

>>> H(6).lt(3)
H({False: 4, True: 2})

See the map and umap methods.

Source code in dyce/h.py
@beartype
def lt(self, other: _OperandT) -> H:
    r"""
    Shorthand for ``#!python self.map(operator.__lt__, other).umap(bool)``.

    ``` python
    >>> H(6).lt(3)
    H({False: 4, True: 2})

    ```

    See the [``map``][dyce.h.H.map] and [``umap``][dyce.h.H.umap] methods.
    """
    return self.map(__lt__, other).umap(bool)

map(self, bin_op: _BinaryOperatorT, right_operand: _OperandT) -> H

Applies bin_op to each outcome of the histogram as the left operand and right_operand as the right. Shorthands exist for many arithmetic operators and comparators.

>>> import operator
>>> d6 = H(6)
>>> d6.map(operator.__add__, d6)
H({2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 5, 9: 4, 10: 3, 11: 2, 12: 1})
>>> d6.map(operator.__add__, d6) == d6 + d6
True
>>> d6.map(operator.__pow__, 2)
H({1: 1, 4: 1, 9: 1, 16: 1, 25: 1, 36: 1})
>>> d6.map(operator.__pow__, 2) == d6 ** 2
True
>>> d6.map(operator.__gt__, 3)
H({False: 3, True: 3})
>>> d6.map(operator.__gt__, 3) == d6.gt(3)
True
Source code in dyce/h.py
@beartype
def map(self, bin_op: _BinaryOperatorT, right_operand: _OperandT) -> H:
    r"""
    Applies *bin_op* to each outcome of the histogram as the left operand and
    *right_operand* as the right. Shorthands exist for many arithmetic operators and
    comparators.

    ``` python
    >>> import operator
    >>> d6 = H(6)
    >>> d6.map(operator.__add__, d6)
    H({2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 5, 9: 4, 10: 3, 11: 2, 12: 1})
    >>> d6.map(operator.__add__, d6) == d6 + d6
    True

    ```

    ``` python
    >>> d6.map(operator.__pow__, 2)
    H({1: 1, 4: 1, 9: 1, 16: 1, 25: 1, 36: 1})
    >>> d6.map(operator.__pow__, 2) == d6 ** 2
    True

    ```

    ``` python
    >>> d6.map(operator.__gt__, 3)
    H({False: 3, True: 3})
    >>> d6.map(operator.__gt__, 3) == d6.gt(3)
    True

    ```
    """
    if isinstance(right_operand, HableT):
        right_operand = right_operand.h()

    if isinstance(right_operand, H):
        return type(self)(
            (bin_op(s, o), self[s] * right_operand[o])
            for s, o in product(self, right_operand)
        )
    else:
        return type(self)(
            (bin_op(outcome, right_operand), count)
            for outcome, count in self.items()
        )

mean(self) -> RealLike

Returns the mean of the weighted outcomes (or 0.0 if there are no outcomes).

Source code in dyce/h.py
@beartype
def mean(self) -> RealLike:
    r"""
    Returns the mean of the weighted outcomes (or 0.0 if there are no outcomes).
    """
    numerator: float
    denominator: float
    numerator = denominator = 0

    for outcome, count in self.items():
        numerator += outcome * count
        denominator += count

    return numerator / (denominator or 1)
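For example:

>>> H(6).mean()
3.5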

ne(self, other: _OperandT) -> H

Shorthand for self.map(operator.__ne__, other).umap(bool).

>>> H(6).ne(3)
H({False: 1, True: 5})

See the map and umap methods.

Source code in dyce/h.py
@beartype
def ne(self, other: _OperandT) -> H:
    r"""
    Shorthand for ``#!python self.map(operator.__ne__, other).umap(bool)``.

    ``` python
    >>> H(6).ne(3)
    H({False: 1, True: 5})

    ```

    See the [``map``][dyce.h.H.map] and [``umap``][dyce.h.H.umap] methods.
    """
    return self.map(__ne__, other).umap(bool)

order_stat_for_n_at_pos(self, n: SupportsInt, pos: SupportsInt) -> H

Experimental

This method should be considered experimental and may change or disappear in future versions.

Shorthand for self.order_stat_func_for_n(n)(pos).

Source code in dyce/h.py
@experimental
@beartype
def order_stat_for_n_at_pos(self, n: SupportsInt, pos: SupportsInt) -> H:
    r"""
    !!! warning "Experimental"

        This method should be considered experimental and may change or disappear in
        future versions.

    Shorthand for ``#!python self.order_stat_func_for_n(n)(pos)``.
    """
    # TODO(posita): Explore different memoization strategies (e.g., with
    # functools.cache) for this method and H.order_stat_func_for_n
    return self.order_stat_func_for_n(n)(pos)

order_stat_func_for_n(self, n: SupportsInt) -> Callable[[SupportsInt], 'H']

Experimental

This method should be considered experimental and may change or disappear in future versions.

Returns a function that takes a single argument (pos) and computes the probability distribution for each outcome appearing in that position among n@self.

>>> d6avg = H((2, 3, 3, 4, 4, 5))
>>> order_stat_for_5d6avg = d6avg.order_stat_func_for_n(5)
>>> order_stat_for_5d6avg(3)  # counts where outcome appears at index 3
H({2: 26, 3: 1432, 4: 4792, 5: 1526})

The results show that, when rolling five six-sided “averaging” dice and sorting each roll, there are 26 ways where 2 appears at the fourth (index 3) position, 1432 ways where 3 appears at the fourth position, etc. This can be verified independently using the computationally expensive method of enumerating rolls and counting those that meet the criteria.

>>> from dyce import P
>>> p_5d6avg = 5@P(d6avg)
>>> sum(count for roll, count in p_5d6avg.rolls_with_counts() if roll[3] == 5)
1526

This method exists in addition to the H.order_stat_for_n_at_pos method because computing the betas for each outcome in n is unnecessary for each pos. Where different pos values are needed for the same n (e.g., in a loop) and where n is large, that overhead can be significant. The returned function caches those betas for n so that results for repeated queries at different pos values can be computed much faster.

%timeit [H(6).order_stat_for_n_at_pos(30, i) for i in range(10)]
462 ms ± 16.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

%timeit order_stat_for_30d6_at_pos = H(6).order_stat_func_for_n(30) ; [order_stat_for_30d6_at_pos(i) for i in range(10)]
54.4 ms ± 1.47 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Source: perf_order_stat_for_n.ipy
from dyce import H

print(f"%timeit [H(6).order_stat_for_n_at_pos(30, i) for i in range(10)]")
%timeit [H(6).order_stat_for_n_at_pos(30, i) for i in range(10)]
print()

print(f"%timeit order_stat_for_30d6_at_pos = H(6).order_stat_func_for_n(30) ; [order_stat_for_30d6_at_pos(i) for i in range(10)]")
%timeit order_stat_for_30d6_at_pos = H(6).order_stat_func_for_n(30) ; [order_stat_for_30d6_at_pos(i) for i in range(10)]
print()
Source code in dyce/h.py
@experimental
@beartype
def order_stat_func_for_n(self, n: SupportsInt) -> Callable[[SupportsInt], "H"]:
    r"""
    !!! warning "Experimental"

        This method should be considered experimental and may change or disappear in
        future versions.

    Returns a function that takes a single argument (*pos*) and computes the
    probability distribution for each outcome appearing in that position among
    ``#!python n@self``.

    ``` python
    >>> d6avg = H((2, 3, 3, 4, 4, 5))
    >>> order_stat_for_5d6avg = d6avg.order_stat_func_for_n(5)
    >>> order_stat_for_5d6avg(3)  # counts where outcome appears at index 3
    H({2: 26, 3: 1432, 4: 4792, 5: 1526})

    ```

    The results show that, when rolling five six-sided “averaging” dice and sorting
    each roll, there are 26 ways where ``#!python 2`` appears at the fourth (index
    ``#!python 3``) position, 1432 ways where ``#!python 3`` appears at the fourth
    position, etc. This can be verified independently using the computationally
    expensive method of enumerating rolls and counting those that meet the criteria.

    ``` python
    >>> from dyce import P
    >>> p_5d6avg = 5@P(d6avg)
    >>> sum(count for roll, count in p_5d6avg.rolls_with_counts() if roll[3] == 5)
    1526

    ```

    This method exists in addition to the
    [``H.order_stat_for_n_at_pos`` method][dyce.h.H.order_stat_for_n_at_pos] because
    computing the betas for each outcome in *n* is unnecessary for each *pos*. Where
    different *pos* values are needed for the same *n* (e.g., in a loop) and where
    *n* is large, that overhead can be significant. The returned function caches
    those betas for *n* so that results for repeated queries at different *pos*
    values can be computed much faster.

    ``` python
    --8<-- "docs/assets/perf_order_stat_for_n.txt"
    ```

    <details>
    <summary>Source: <a href="https://github.com/posita/dyce/blob/latest/docs/assets/perf_order_stat_for_n.ipy"><code>perf_order_stat_for_n.ipy</code></a></summary>

    ``` python
    --8<-- "docs/assets/perf_order_stat_for_n.ipy"
    ```
    </details>
    """
    betas_by_outcome: Dict[RealLike, Tuple[H, H]] = {}

    for outcome in self.outcomes():
        betas_by_outcome[outcome] = (
            n @ self.le(outcome),
            n @ self.lt(outcome),
        )

    def _gen_h_items_at_pos(pos: int) -> Iterator[Tuple[RealLike, int]]:
        for outcome, (h_le, h_lt) in betas_by_outcome.items():
            yield (
                outcome,
                h_le.gt(pos).get(True, 0) - h_lt.gt(pos).get(True, 0),
            )

    @beartype
    def order_stat_for_n_at_pos(pos: SupportsInt) -> H:
        return type(self)(_gen_h_items_at_pos(as_int(pos)))

    return order_stat_for_n_at_pos

outcomes(self) -> KeysView[RealLike]

More descriptive synonym for the keys method.

Source code in dyce/h.py
@beartype
def outcomes(self) -> KeysView[RealLike]:
    r"""
    More descriptive synonym for the [``keys`` method][dyce.h.H.keys].
    """
    return self._h.keys()

rmap(self, left_operand: RealLike, bin_op: _BinaryOperatorT) -> H

Analogous to the map method, but where the caller supplies left_operand.

>>> import operator
>>> d6 = H(6)
>>> d6.rmap(2, operator.__pow__)
H({2: 1, 4: 1, 8: 1, 16: 1, 32: 1, 64: 1})
>>> d6.rmap(2, operator.__pow__) == 2 ** d6
True

Note

The positions of left_operand and bin_op are different from those of the map method. This is intentional and serves as a reminder of operand ordering.

Source code in dyce/h.py
@beartype
def rmap(self, left_operand: RealLike, bin_op: _BinaryOperatorT) -> H:
    r"""
    Analogous to the [``map`` method][dyce.h.H.map], but where the caller supplies
    *left_operand*.

    ``` python
    >>> import operator
    >>> d6 = H(6)
    >>> d6.rmap(2, operator.__pow__)
    H({2: 1, 4: 1, 8: 1, 16: 1, 32: 1, 64: 1})
    >>> d6.rmap(2, operator.__pow__) == 2 ** d6
    True

    ```

    !!! note

        The positions of *left_operand* and *bin_op* are different from those of the
        [``map`` method][dyce.h.H.map]. This is intentional and serves as a reminder
        of operand ordering.
    """
    return type(self)(
        (bin_op(left_operand, outcome), count) for outcome, count in self.items()
    )

roll(self) -> RealLike

Returns a (weighted) random outcome.

Source code in dyce/h.py
@beartype
def roll(self) -> RealLike:
    r"""
    Returns a (weighted) random outcome.
    """
    return (
        rng.RNG.choices(
            population=tuple(self.outcomes()),
            weights=tuple(self.counts()),
            k=1,
        )[0]
        if self
        else 0
    )
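Results depend on the RNG state, so the value below is merely illustrative:

>>> H(6).roll()  # doctest: +SKIP
4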

stdev(self, mu: Optional[RealLike] = None) -> RealLike

Shorthand for math.sqrt(self.variance(mu)).

Source code in dyce/h.py
@beartype
def stdev(self, mu: Optional[RealLike] = None) -> RealLike:
    r"""
    Shorthand for ``#!python math.sqrt(self.variance(mu))``.
    """
    return sqrt(self.variance(mu))
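For example (rounded here to sidestep floating-point noise):

>>> round(H(6).stdev(), 4)
1.7078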

substitute(self, expand: _ExpandT, coalesce: _CoalesceT = coalesce_replace, max_depth: SupportsInt = 1, precision_limit: Fraction = Fraction(0, 1)) -> H

Experimental

The precision_limit parameter should be considered experimental and may change or disappear in future versions.

Calls expand on each outcome. If expand returns a single outcome, it replaces the existing outcome. If it returns an H object, expansion is performed again (recursively) on that object until max_depth or precision_limit is exhausted. coalesce is then called with the original outcome and the expanded histogram, and the histogram coalesce returns is “folded” into the result. (More on these terms and concepts below.)

The default behavior for coalesce is to replace the outcome with the expanded histogram. Returned histograms are always reduced to their lowest terms.

coalesce is not called unless expand returns a histogram

If expand returns a single outcome, it always replaces the existing outcome. This is intentional. To return a single outcome, but trigger coalesce, characterize that outcome as a single-sided die (e.g., H({outcome: 1})).

See the coalesce_replace and lowest_terms methods.

This method can be used to model complex mechanics. The following models re-rolling a face of 1 on the first roll:

>>> def reroll_one(h: H, outcome):
...   return h if outcome == 1 else outcome

>>> H(6).substitute(reroll_one)
H({1: 1, 2: 7, 3: 7, 4: 7, 5: 7, 6: 7})

See the explode method for a common shorthand for “exploding” dice (i.e., where, if the greatest face comes up, the die is re-rolled and the result is added to a running sum).

This method uses the aggregate_with_counts function in its implementation. As such, if coalesce returns the empty histogram (H({})), the corresponding outcome and its count are omitted from the result without substitution or scaling. A silly example is modeling a d5 by indefinitely re-rolling a d6 until something other than a 6 comes up.

>>> H(6).substitute(lambda __, outcome: H({}) if outcome == 6 else outcome)
H({1: 1, 2: 1, 3: 1, 4: 1, 5: 1})

This technique is more useful when modeling re-rolling certain derived outcomes, like ties in a contest.

>>> d6_3, d8_2 = 3@H(6), 2@H(8)
>>> d6_3.vs(d8_2)
H({-1: 4553, 0: 1153, 1: 8118})
>>> d6_3.vs(d8_2).substitute(lambda __, outcome: H({}) if outcome == 0 else outcome)
H({-1: 4553, 1: 8118})

Because it delegates to a callback for refereeing substitution decisions, substitute is quite flexible and well suited to modeling (or at least approximating) logical progressions with dependent variables. Consider the following mechanic:

  1. Start with a total of zero.
  2. Roll a six-sided die. Add the face to the total. If the face was a six, go to step 3. Otherwise stop.
  3. Roll a four-sided die. Add the face to the total. If the face was a four, go to step 2. Otherwise stop.

What is the likelihood of an even final tally? This can be approximated by:

>>> d4, d6 = H(4), H(6)

>>> def reroll_greatest_on_d4_d6(h: H, outcome):
...   if outcome == max(h):
...     if h == d6: return d4
...     if h == d4: return d6
...   return outcome

>>> import operator
>>> h = d6.substitute(reroll_greatest_on_d4_d6, operator.__add__, max_depth=6)
>>> h_even = h.is_even()
>>> print(f"{h_even[1] / h_even.total:.3%}")
39.131%

Surprised? Because both six and four are even numbers, the only way we keep rolling is if the total is even. You might think this would lead to evens being more likely. However, we only care about the final tally and the rules direct us to re-roll certain evens (nudging us toward an odd number more often than not).

We can also use this method to model expected damage from a single attack in d20-like role playing games.

>>> bonus = 1
>>> dmg_dice = H(8)
>>> dmg = dmg_dice + bonus
>>> crit = dmg + dmg_dice
>>> target = 15 - bonus
>>> d20 = H(20)

>>> def dmg_from_attack_roll(h: H, outcome):
...   if outcome == 20:
...     return crit
...   elif outcome >= target:
...     return dmg
...   else:
...     return 0

>>> h = d20.substitute(dmg_from_attack_roll)
>>> print(h.format(scaled=True))
avg |    2.15
std |    3.40
var |   11.55
  0 |  65.00% |##################################################
  2 |   3.75% |##
  3 |   3.83% |##
  4 |   3.91% |###
  5 |   3.98% |###
  6 |   4.06% |###
  7 |   4.14% |###
  8 |   4.22% |###
  9 |   4.30% |###
 10 |   0.62% |
 11 |   0.55% |
 12 |   0.47% |
 13 |   0.39% |
 14 |   0.31% |
 15 |   0.23% |
 16 |   0.16% |
 17 |   0.08% |

When expand returns an H object, outcomes produced from the corresponding coalesce are accumulated, but the counts retain their “scale” within the context of the expansion. This becomes clearer when there is no overlap between the substituted histogram and the other outcomes.

>>> d6 = H(6)
>>> d00 = (H(10) - 1) * 10 ; d00
H({0: 1, 10: 1, 20: 1, 30: 1, 40: 1, 50: 1, 60: 1, 70: 1, 80: 1, 90: 1})
>>> set(d6) & set(d00) == set()  # no outcomes in common
True
>>> d6_d00 = d6.substitute(
...   # If a one comes up when rolling the d6,
...   # roll a d00 and take that result instead
...   lambda h, outcome: d00 if outcome == 1 else outcome
... ) ; d6_d00
H({0: 1,
 2: 10,
 3: 10,
 4: 10,
 5: 10,
 6: 10,
 10: 1,
 20: 1,
 30: 1,
 40: 1,
 50: 1,
 60: 1,
 70: 1,
 80: 1,
 90: 1})

Note that the outcomes’ counts from the d00 collectively make up the same proportion of the result as the single 1 they replaced from the d6.

>>> from fractions import Fraction
>>> Fraction(
...   sum(count for outcome, count in d6_d00.items() if outcome in d00),
...   d6_d00.total,
... )
Fraction(1, 6)
>>> Fraction(d6[1], d6.total)
Fraction(1, 6)

Precision limits

This method will halt recursive substitution on any branch either when its depth exceeds max_depth or its “contextual precision” is precision_limit or less. In either case, substitution is attempted for all of the outcomes of a(n expanded) histogram or none of them. The contextual precision of a histogram is its proportion to the whole. The contextual precision of the original (or top-level) histogram is Fraction(1,1). By setting precision_limit to that value, we basically ensure no substitution.

>>> d6.substitute(
...   lambda h, outcome: d00 if outcome == 1 else outcome,
...   precision_limit=Fraction(1, 1),  # no substitution
... ) == d6
True
>>> d6.substitute(
...   lambda h, outcome: d00 if outcome == 1 else outcome,
...   max_depth=0,  # no substitution
... ) == d6
True

Let’s make a contrived, but illustrative modification to our d6/d00 example from above. If a one comes up when rolling a d6, roll a d00, but re-roll any 80s.

>>> d6.substitute(
...   lambda h, outcome: d00 if outcome in (1, 80) else outcome,
...   max_depth=100,  # <-- we'll never hit this
...   # will halt substitution after the original one from the d6
...   precision_limit=Fraction(1, 6),
... ) == d6_d00
True
>>> d6.substitute(
...   lambda h, outcome: d00 if outcome in (1, 80) else outcome,
...   max_depth=100,
...   # will halt substitution after the first 80 substitution
...   # after the original one from the d6
...   precision_limit=Fraction(1, 6) * Fraction(1, 10),
... )
H({0: 11,
 2: 100,
 ...,
 6: 100,
 10: 11,
 ...,
 70: 11,
 80: 1,
 90: 11})
>>> d6.substitute(
...   lambda h, outcome: d00 if outcome in (1, 80) else outcome,
...   max_depth=100,
...   # will halt substitution after the second 80 substitution
...   # after the original one from the d6
...   precision_limit=Fraction(1, 6) * Fraction(1, 10) - Fraction(1, 1000000000),  # <-- juuust under the wire
... )
H({0: 111,
 2: 1000,
 ...,
 6: 1000,
 10: 111,
 ...,
 70: 111,
 80: 1,
 90: 111})

The default value for precision_limit is zero, which basically means it is ignored and recursion is limited solely by max_depth. If you want to ensure that this method stops delving based solely on precision, set max_depth to -1, which is equivalent to sys.getrecursionlimit() + 1.¹ Be aware that this skews results in favor of non-limited branches.

>>> h = H({1: 1, 2: 2, 3: 3})
>>> print(h.explode(max_depth=5).format(scaled=True))
avg |    4.59
std |    3.96
var |   15.65
  1 |  16.67% |########################
  2 |  33.33% |#################################################
  4 |   8.33% |############
  5 |  16.67% |########################
  7 |   4.17% |######
  8 |   8.33% |############
 10 |   2.08% |###
 11 |   4.17% |######
 13 |   1.04% |#
 14 |   2.08% |###
 16 |   0.52% |
 17 |   1.04% |#
 18 |   1.56% |##
>>> print(h.explode(max_depth=-1, precision_limit=Fraction(1, 6 ** 2)).format(scaled=True))
avg |    4.63
std |    4.09
var |   16.72
  1 |  16.67% |########################
  2 |  33.33% |#################################################
  4 |   8.33% |############
  5 |  16.67% |########################
  7 |   4.17% |######
  8 |   8.33% |############
 10 |   2.08% |###
 11 |   4.17% |######
 13 |   1.04% |#
 14 |   2.08% |###
 16 |   0.52% |
 17 |   1.04% |#
 19 |   0.26% |
 20 |   0.52% |
 21 |   0.78% |#

Also be aware that without max_depth as a safety net, some substitutions are guaranteed to result in RecursionErrors, even with very high precision_limits.

>>> H(1).substitute(
...   lambda h, outcome: H({outcome + 1: 1}),  # expands to a single-sided die
...   max_depth=-1,
...   precision_limit=Fraction(999999, 1000000),
... )
Traceback (most recent call last):
  ...
RecursionError: maximum recursion depth exceeded in comparison

H.explode’s expand implementation guards against this by returning outcome if the passed histogram has only one face. Consider a similar approach for your own expand implementations if outcomes’ contextual probabilities do not asymptotically approach zero.
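
A minimal sketch of such a guard, where guarded_expand is a hypothetical variant of the failing expand above that stops once only a single face remains:

>>> from fractions import Fraction
>>> def guarded_expand(h: H, outcome):
...   if len(h) == 1:
...     return outcome  # refuse to expand single-faced histograms any further
...   return H({outcome + 1: 1})

>>> H(1).substitute(guarded_expand, max_depth=-1, precision_limit=Fraction(999999, 1000000)) == H(1)
True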


  1. This method will “bottom out” far earlier. As of this writing, the practical limit of its implementation (without optimization) is something close to \(\frac {1} {3} \times \left( limit - depth \right)\), where \(limit\) is sys.getrecursionlimit() and \(depth\) is len(inspect.stack(0)). This also assumes the provided implementations for expand and coalesce don’t contribute significantly to the call stack. Setting max_depth to -1 or one beyond the absolute limit signals that the caller wants it out of the way. 

Source code in dyce/h.py
@beartype
def substitute(
    self,
    expand: _ExpandT,
    coalesce: _CoalesceT = coalesce_replace,
    max_depth: SupportsInt = 1,
    precision_limit: Fraction = Fraction(0),
) -> H:
    r"""
    !!! warning "Experimental"

        The *precision_limit* parameter should be considered experimental and may
        change or disappear in future versions.

    Calls *expand* on each outcome. If *expand* returns a single outcome, it
    replaces the existing outcome. If it returns an [``H`` object][dyce.h.H],
    expansion is performed again (recursively) on that object until *max_depth* or
    *precision_limit* is exhausted. *coalesce* is called with the expanded histogram
    and the original outcome, and the returned histogram is “folded” into the
    result. (More on these terms and concepts below.)

    The default behavior for *coalesce* is to replace the outcome with the expanded
    histogram. Returned histograms are always reduced to their lowest terms.

    !!! note "*coalesce* is not called unless *expand* returns a histogram"

        If *expand* returns a single outcome, it *always* replaces the existing
        outcome. This is intentional. To return a single outcome, but trigger
        *coalesce*, characterize that outcome as a single-sided die (e.g.,
        ``#!python H({outcome: 1})``).

    See the [``coalesce_replace``][dyce.h.coalesce_replace] and
    [``lowest_terms``][dyce.h.H.lowest_terms] methods.

    This method can be used to model complex mechanics. The following models
    re-rolling a face of 1 on the first roll:

    ``` python
    >>> def reroll_one(h: H, outcome):
    ...   return h if outcome == 1 else outcome

    >>> H(6).substitute(reroll_one)
    H({1: 1, 2: 7, 3: 7, 4: 7, 5: 7, 6: 7})

    ```

    See the [``explode`` method][dyce.h.H.explode] for a common shorthand for
    “exploding” dice (i.e., where, if the greatest face comes up, the die is
    re-rolled and the result is added to a running sum).

    This method uses the [``aggregate_with_counts``][dyce.h.aggregate_with_counts]
    function in its implementation. As such, if *coalesce* returns the empty
    histogram (``H({})``), the corresponding outcome and its count are omitted from
    the result without substitution or scaling. A silly example is modeling a d5 by
    indefinitely re-rolling a d6 until something other than a 6 comes up.

    ``` python
    >>> H(6).substitute(lambda __, outcome: H({}) if outcome == 6 else outcome)
    H({1: 1, 2: 1, 3: 1, 4: 1, 5: 1})

    ```

    This technique is more useful when modeling re-rolling certain derived
    outcomes, like ties in a contest.

    ``` python
    >>> d6_3, d8_2 = 3@H(6), 2@H(8)
    >>> d6_3.vs(d8_2)
    H({-1: 4553, 0: 1153, 1: 8118})
    >>> d6_3.vs(d8_2).substitute(lambda __, outcome: H({}) if outcome == 0 else outcome)
    H({-1: 4553, 1: 8118})

    ```

    Because it delegates to a callback for refereeing substitution decisions,
    ``#!python substitute`` is quite flexible and well suited to modeling (or at
    least approximating) logical progressions with dependent variables. Consider the
    following mechanic:

      1. Start with a total of zero.
      2. Roll a six-sided die. Add the face to the total. If the face was a six, go
         to step 3. Otherwise stop.
      3. Roll a four-sided die. Add the face to the total. If the face was a four,
         go to step 2. Otherwise stop.

    What is the likelihood of an even final tally? This can be approximated by:

    ``` python
    >>> d4, d6 = H(4), H(6)

    >>> def reroll_greatest_on_d4_d6(h: H, outcome):
    ...   if outcome == max(h):
    ...     if h == d6: return d4
    ...     if h == d4: return d6
    ...   return outcome

    >>> import operator
    >>> h = d6.substitute(reroll_greatest_on_d4_d6, operator.__add__, max_depth=6)
    >>> h_even = h.is_even()
    >>> print(f"{h_even[1] / h_even.total:.3%}")
    39.131%

    ```

    Surprised? Because both six and four are even numbers, the only way we keep
    rolling is if the total is even. You might think this would lead to evens being
    *more* likely. However, we only care about the final tally and the rules direct
    us to re-roll certain evens (nudging us toward an odd number more often than
    not).

    We can also use this method to model expected damage from a single attack in
    d20-like role playing games.

    ``` python
    >>> bonus = 1
    >>> dmg_dice = H(8)
    >>> dmg = dmg_dice + bonus
    >>> crit = dmg + dmg_dice
    >>> target = 15 - bonus
    >>> d20 = H(20)

    >>> def dmg_from_attack_roll(h: H, outcome):
    ...   if outcome == 20:
    ...     return crit
    ...   elif outcome >= target:
    ...     return dmg
    ...   else:
    ...     return 0

    >>> h = d20.substitute(dmg_from_attack_roll)
    >>> print(h.format(scaled=True))
    avg |    2.15
    std |    3.40
    var |   11.55
      0 |  65.00% |##################################################
      2 |   3.75% |##
      3 |   3.83% |##
      4 |   3.91% |###
      5 |   3.98% |###
      6 |   4.06% |###
      7 |   4.14% |###
      8 |   4.22% |###
      9 |   4.30% |###
     10 |   0.62% |
     11 |   0.55% |
     12 |   0.47% |
     13 |   0.39% |
     14 |   0.31% |
     15 |   0.23% |
     16 |   0.16% |
     17 |   0.08% |

    ```

    When *expand* returns an [``H`` object][dyce.h.H], outcomes produced from the
    corresponding *coalesce* are accumulated, but the counts retain their “scale”
    within the context of the expansion. This becomes clearer when there is no
    overlap between the substituted histogram and the other outcomes.

    ``` python
    >>> d6 = H(6)
    >>> d00 = (H(10) - 1) * 10 ; d00
    H({0: 1, 10: 1, 20: 1, 30: 1, 40: 1, 50: 1, 60: 1, 70: 1, 80: 1, 90: 1})
    >>> set(d6) & set(d00) == set()  # no outcomes in common
    True
    >>> d6_d00 = d6.substitute(
    ...   # If a one comes up when rolling the d6,
    ...   # roll a d00 and take that result instead
    ...   lambda h, outcome: d00 if outcome == 1 else outcome
    ... ) ; d6_d00
    H({0: 1,
     2: 10,
     3: 10,
     4: 10,
     5: 10,
     6: 10,
     10: 1,
     20: 1,
     30: 1,
     40: 1,
     50: 1,
     60: 1,
     70: 1,
     80: 1,
     90: 1})

    ```

    Note that the outcomes’ counts from the d00 collectively make up the same
    proportion of the result as the single 1 they replaced from the d6.

    ``` python
    >>> from fractions import Fraction
    >>> Fraction(
    ...   sum(count for outcome, count in d6_d00.items() if outcome in d00),
    ...   d6_d00.total,
    ... )
    Fraction(1, 6)
    >>> Fraction(d6[1], d6.total)
    Fraction(1, 6)

    ```

    !!! tip "Precision limits"

        This method will halt recursive substitution on any branch *either* when its
        depth exceeds *max_depth* *or* its “contextual precision” is
        *precision_limit* or less. In either case, substitution is attempted for all
        of the outcomes of a(n expanded) histogram or none of them. The contextual
        precision of a histogram is its proportion to the whole. The contextual
        precision of the original (or top-level) histogram is ``#!python Fraction(1,
        1)``. By setting *precision_limit* to that value, we basically ensure no
        substitution.

        ``` python
        >>> d6.substitute(
        ...   lambda h, outcome: d00 if outcome == 1 else outcome,
        ...   precision_limit=Fraction(1, 1),  # no substitution
        ... ) == d6
        True
        >>> d6.substitute(
        ...   lambda h, outcome: d00 if outcome == 1 else outcome,
        ...   max_depth=0,  # no substitution
        ... ) == d6
        True

        ```

        Let’s make a contrived, but illustrative modification to our d6/d00 example
        from above. If a one comes up when rolling a d6, roll a d00, but re-roll any
        80s.

        ``` python
        >>> d6.substitute(
        ...   lambda h, outcome: d00 if outcome in (1, 80) else outcome,
        ...   max_depth=100,  # <-- we'll never hit this
        ...   # will halt substitution after the original one from the d6
        ...   precision_limit=Fraction(1, 6),
        ... ) == d6_d00
        True
        >>> d6.substitute(
        ...   lambda h, outcome: d00 if outcome in (1, 80) else outcome,
        ...   max_depth=100,
        ...   # will halt substitution after the first 80 substitution
        ...   # after the original one from the d6
        ...   precision_limit=Fraction(1, 6) * Fraction(1, 10),
        ... )
        H({0: 11,
         2: 100,
         ...,
         6: 100,
         10: 11,
         ...,
         70: 11,
         80: 1,
         90: 11})
        >>> d6.substitute(
        ...   lambda h, outcome: d00 if outcome in (1, 80) else outcome,
        ...   max_depth=100,
        ...   # will halt substitution after the second 80 substitution
        ...   # after the original one from the d6
        ...   precision_limit=Fraction(1, 6) * Fraction(1, 10) - Fraction(1, 1000000000),  # <-- juuust under the wire
        ... )
        H({0: 111,
         2: 1000,
         ...,
         6: 1000,
         10: 111,
         ...,
         70: 111,
         80: 1,
         90: 111})

        ```

        The default value for *precision_limit* is zero, which basically means it is
        ignored and recursion is limited solely by *max_depth*. If you want to
        ensure that this method stops delving based *solely* on precision, set
        *max_depth* to ``#!python -1``, which is equivalent to ``#!python
        sys.getrecursionlimit() + 1``[^1]. Be aware that this skews results in favor
        of non-limited branches.

        ``` python
        >>> h = H({1: 1, 2: 2, 3: 3})
        >>> print(h.explode(max_depth=5).format(scaled=True))
        avg |    4.59
        std |    3.96
        var |   15.65
          1 |  16.67% |########################
          2 |  33.33% |#################################################
          4 |   8.33% |############
          5 |  16.67% |########################
          7 |   4.17% |######
          8 |   8.33% |############
         10 |   2.08% |###
         11 |   4.17% |######
         13 |   1.04% |#
         14 |   2.08% |###
         16 |   0.52% |
         17 |   1.04% |#
         18 |   1.56% |##
        >>> print(h.explode(max_depth=-1, precision_limit=Fraction(1, 6 ** 2)).format(scaled=True))
        avg |    4.63
        std |    4.09
        var |   16.72
          1 |  16.67% |########################
          2 |  33.33% |#################################################
          4 |   8.33% |############
          5 |  16.67% |########################
          7 |   4.17% |######
          8 |   8.33% |############
         10 |   2.08% |###
         11 |   4.17% |######
         13 |   1.04% |#
         14 |   2.08% |###
         16 |   0.52% |
         17 |   1.04% |#
         19 |   0.26% |
         20 |   0.52% |
         21 |   0.78% |#

        ```

        Also be aware that without *max_depth* as a safety net, some substitutions
        are guaranteed to result in ``#!python RecursionError``s, even with very
        high *precision_limit*s.

        ``` python
        >>> H(1).substitute(
        ...   lambda h, outcome: H({outcome + 1: 1}),  # expands to a single-sided die
        ...   max_depth=-1,
        ...   precision_limit=Fraction(999999, 1000000),
        ... )
        Traceback (most recent call last):
          ...
        RecursionError: maximum recursion depth exceeded in comparison

        ```

        [``H.explode``][dyce.h.H.explode]’s *expand* implementation guards against
        this by returning ``#!python outcome`` if the passed histogram has only one
        face. Consider a similar approach for your own *expand* implementations if
        outcomes’ contextual probabilities do not asymptotically approach zero.

    [^1]:

        This method will “bottom out” far earlier. As of this writing, the practical
        limit of its implementation (without optimization) is something close to
        $\frac {1} {3} \times \left( limit - depth \right)$, where $limit$ is
        ``#!python sys.getrecursionlimit()`` and $depth$ is ``#!python
        len(inspect.stack(0))``. This also assumes the provided implementations for
        *expand* and *coalesce* don’t contribute significantly to the call stack.
        Setting *max_depth* to ``#!python -1`` or one beyond the absolute limit
        signals that the caller wants it out of the way.
    """
    max_depth = as_int(max_depth)

    if max_depth == -1:
        max_depth = sys.getrecursionlimit() + 1

    if max_depth < 0:
        raise ValueError(
            "max_depth cannot be an arbitrary negative number (use -1 explicitly to indicate no limit)"
        )

    if precision_limit < 0 or precision_limit > 1:
        raise ValueError(
            f"precision_limit ({precision_limit}) must be between zero and one, inclusive"
        )

    def _substitute(
        h: H,
        depth: int = 0,
        contextual_precision: Fraction = Fraction(1),
    ) -> H:
        assert coalesce is not None

        if depth == max_depth or contextual_precision <= precision_limit:
            return h

        def _expand_and_coalesce() -> Iterator[Tuple[Union[H, RealLike], int]]:
            total = h.total

            for outcome, count in h.items():
                expanded = expand(h, outcome)

                if isinstance(expanded, H):
                    # Keep expanding deeper, if we can
                    expanded_precision = Fraction(
                        contextual_precision.numerator * count,
                        contextual_precision.denominator * total,
                    )
                    expanded = _substitute(expanded, depth + 1, expanded_precision)
                    # Coalesce the result
                    expanded = coalesce(expanded, outcome)

                yield expanded, count

        return aggregate_with_counts(_expand_and_coalesce(), type(self))

    return _substitute(self).lowest_terms()

umap(self, un_op: _UnaryOperatorT) -> H

Applies un_op to each outcome of the histogram.

>>> import operator
>>> H(6).umap(operator.__neg__)
H(-6)
>>> H(4).umap(lambda outcome: (-outcome) ** outcome)
H({-27: 1, -1: 1, 4: 1, 256: 1})
Source code in dyce/h.py
@beartype
def umap(self, un_op: _UnaryOperatorT) -> H:
    r"""
    Applies *un_op* to each outcome of the histogram.

    ``` python
    >>> import operator
    >>> H(6).umap(operator.__neg__)
    H(-6)

    ```

    ``` python
    >>> H(4).umap(lambda outcome: (-outcome) ** outcome)
    H({-27: 1, -1: 1, 4: 1, 256: 1})

    ```
    """
    h = type(self)((un_op(outcome), count) for outcome, count in self.items())

    if self._simple_init is not None:
        simple_init = un_op(self._simple_init)

        if isinstance(simple_init, SupportsInt):
            h_simple = type(self)(simple_init)

            if h_simple == h:
                return h_simple

    return h

values(self) -> ValuesView[int]

D.values() -> an object providing a view on D's values
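
In other words, a histogram’s values are its counts, as a quick sketch shows:

>>> h = H((2, 3, 3))
>>> list(h.values()) == list(h.counts()) == [1, 2]
True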

Source code in dyce/h.py
@beartype
def values(self) -> ValuesView[int]:
    return self.counts()

variance(self, mu: Optional[RealLike] = None) -> RealLike

Returns the variance of the weighted outcomes. If provided, mu is used as the mean (to avoid duplicate computation).
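
For example, a d6 has a mean of 3.5, so its variance is the count-weighted average of the squared deviations from 3.5. A quick sketch checking that directly, and confirming that a precomputed mu yields the same result:

>>> h = H(6)
>>> h.variance() == 17.5 / 6  # 17.5 is the sum of (outcome - 3.5) ** 2 over a d6
True
>>> h.variance(mu=h.mean()) == h.variance()
True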

Source code in dyce/h.py
@beartype
def variance(self, mu: Optional[RealLike] = None) -> RealLike:
    r"""
    Returns the variance of the weighted outcomes. If provided, *mu* is used as the mean
    (to avoid duplicate computation).
    """
    mu = mu if mu else self.mean()
    numerator: float
    denominator: float
    numerator = denominator = 0

    for outcome, count in self.items():
        numerator += (outcome - mu) ** 2 * count
        denominator += count

    return numerator / (denominator or 1)

vs(self, other: _OperandT) -> H

Compares the histogram with other. -1 represents where other is greater. 0 represents where they are equal. 1 represents where other is less.

Shorthand for self.within(0, 0, other).

>>> H(6).vs(H(4))
H({-1: 6, 0: 4, 1: 14})
>>> H(6).vs(H(4)) == H(6).within(0, 0, H(4))
True

See the within method.

Source code in dyce/h.py
@beartype
def vs(self, other: _OperandT) -> H:
    r"""
    Compares the histogram with *other*. -1 represents where *other* is greater. 0
    represents where they are equal. 1 represents where *other* is less.

    Shorthand for ``#!python self.within(0, 0, other)``.

    ``` python
    >>> H(6).vs(H(4))
    H({-1: 6, 0: 4, 1: 14})
    >>> H(6).vs(H(4)) == H(6).within(0, 0, H(4))
    True

    ```

    See the [``within`` method][dyce.h.H.within].
    """
    return self.within(0, 0, other)

within(self, lo: RealLike, hi: RealLike, other: _OperandT = 0) -> H

Computes the difference between the histogram and other. -1 represents where that difference is less than lo. 0 represents where that difference is between lo and hi (inclusive). 1 represents where that difference is greater than hi.

>>> d6_2 = 2@H(6)
>>> d6_2.within(7, 9)
H({-1: 15, 0: 15, 1: 6})
>>> print(d6_2.within(7, 9).format())
avg |   -0.25
std |    0.72
var |    0.52
 -1 |  41.67% |####################
  0 |  41.67% |####################
  1 |  16.67% |########
>>> d6_3, d8_2 = 3@H(6), 2@H(8)
>>> d6_3.within(-1, 1, d8_2)  # 3d6 w/in 1 of 2d8
H({-1: 3500, 0: 3412, 1: 6912})
>>> print(d6_3.within(-1, 1, d8_2).format())
avg |    0.25
std |    0.83
var |    0.69
 -1 |  25.32% |############
  0 |  24.68% |############
  1 |  50.00% |#########################
Source code in dyce/h.py
@beartype
def within(self, lo: RealLike, hi: RealLike, other: _OperandT = 0) -> H:
    r"""
    Computes the difference between the histogram and *other*. -1 represents where that
    difference is less than *lo*. 0 represents where that difference is between *lo*
    and *hi* (inclusive). 1 represents where that difference is greater than *hi*.

    ``` python
    >>> d6_2 = 2@H(6)
    >>> d6_2.within(7, 9)
    H({-1: 15, 0: 15, 1: 6})
    >>> print(d6_2.within(7, 9).format())
    avg |   -0.25
    std |    0.72
    var |    0.52
     -1 |  41.67% |####################
      0 |  41.67% |####################
      1 |  16.67% |########

    ```

    ``` python
    >>> d6_3, d8_2 = 3@H(6), 2@H(8)
    >>> d6_3.within(-1, 1, d8_2)  # 3d6 w/in 1 of 2d8
    H({-1: 3500, 0: 3412, 1: 6912})
    >>> print(d6_3.within(-1, 1, d8_2).format())
    avg |    0.25
    std |    0.83
    var |    0.69
     -1 |  25.32% |############
      0 |  24.68% |############
      1 |  50.00% |#########################

    ```
    """
    return self.map(_within(lo, hi), other)

P (Sequence, Generic, HableOpsMixin)

An immutable pool (ordered sequence) supporting group operations for zero or more H objects (provided or created from the initializer’s args parameter).

>>> from dyce import P
>>> p_d6 = P(6) ; p_d6  # shorthand for P(H(6))
P(6)
>>> P(p_d6, p_d6)  # 2d6
P(6, 6)
>>> 2@p_d6  # also 2d6
P(6, 6)
>>> 2@(2@p_d6) == 4@p_d6
True
>>> p = P(4, P(6, P(8, P(10, P(12, P(20)))))) ; p
P(4, 6, 8, 10, 12, 20)
>>> sum(p.roll()) in p.h()
True

This class implements the HableT protocol and derives from the HableOpsMixin class, which means it can be “flattened” into a single histogram, either explicitly via the h method, or implicitly by using arithmetic operations.

>>> -p_d6
H({-6: 1, -5: 1, -4: 1, -3: 1, -2: 1, -1: 1})
>>> p_d6 + p_d6
H({2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 5, 9: 4, 10: 3, 11: 2, 12: 1})
>>> 2 * P(8) - 1
H({1: 1, 3: 1, 5: 1, 7: 1, 9: 1, 11: 1, 13: 1, 15: 1})

To perform arithmetic on individual H objects in a pool without flattening, use the map, rmap, and umap methods.

>>> import operator
>>> P(4, 6, 8).umap(operator.__neg__)
P(-8, -6, -4)
>>> P(4, 6).map(operator.__pow__, 2)
P(H({1: 1, 4: 1, 9: 1, 16: 1}), H({1: 1, 4: 1, 9: 1, 16: 1, 25: 1, 36: 1}))
>>> P(4, 6).rmap(2, operator.__pow__)
P(H({2: 1, 4: 1, 8: 1, 16: 1}), H({2: 1, 4: 1, 8: 1, 16: 1, 32: 1, 64: 1}))

Comparisons with H objects work as expected.

>>> from dyce import H
>>> 3@p_d6 == H(6) + H(6) + H(6)
True

Indexing selects a contained histogram.

>>> P(4, 6, 8)[0]
H(4)

Note that pools are opinionated about ordering.

>>> P(8, 6, 4)
P(4, 6, 8)
>>> P(8, 6, 4)[0] == P(8, 4, 6)[0] == H(4)
True

In an extension to (departure from) the HableT protocol, the h method’s implementation also affords subsets of outcomes to be “taken” (selected) by passing in selection criteria. Values are indexed from least to greatest. Negative indexes are supported and retain their idiomatic meaning. Modeling the sum of the greatest two faces of three six-sided dice (3d6) can be expressed as:

>>> p_3d6 = 3@p_d6
>>> p_3d6.h(-2, -1)
H({2: 1, 3: 3, 4: 7, 5: 12, 6: 19, 7: 27, 8: 34, 9: 36, 10: 34, 11: 27, 12: 16})
>>> print(p_3d6.h(-2, -1).format())
avg |    8.46
std |    2.21
var |    4.91
  2 |   0.46% |
  3 |   1.39% |
  4 |   3.24% |#
  5 |   5.56% |##
  6 |   8.80% |####
  7 |  12.50% |######
  8 |  15.74% |#######
  9 |  16.67% |########
 10 |  15.74% |#######
 11 |  12.50% |######
 12 |   7.41% |###
Source code in dyce/p.py
class P(Sequence[H], HableOpsMixin):
    r"""
    An immutable pool (ordered sequence) supporting group operations for zero or more
    [``H`` objects][dyce.h.H] (provided or created from the
    [initializer][dyce.p.P.__init__]’s *args* parameter).

    ``` python
    >>> from dyce import P
    >>> p_d6 = P(6) ; p_d6  # shorthand for P(H(6))
    P(6)

    ```

    ``` python
    >>> P(p_d6, p_d6)  # 2d6
    P(6, 6)
    >>> 2@p_d6  # also 2d6
    P(6, 6)
    >>> 2@(2@p_d6) == 4@p_d6
    True

    ```

    ``` python
    >>> p = P(4, P(6, P(8, P(10, P(12, P(20)))))) ; p
    P(4, 6, 8, 10, 12, 20)
    >>> sum(p.roll()) in p.h()
    True

    ```

    This class implements the [``HableT`` protocol][dyce.h.HableT] and derives from the
    [``HableOpsMixin`` class][dyce.h.HableOpsMixin], which means it can be
    “flattened” into a single histogram, either explicitly via the
    [``h`` method][dyce.p.P.h], or implicitly by using arithmetic operations.

    ``` python
    >>> -p_d6
    H({-6: 1, -5: 1, -4: 1, -3: 1, -2: 1, -1: 1})

    ```

    ``` python
    >>> p_d6 + p_d6
    H({2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 5, 9: 4, 10: 3, 11: 2, 12: 1})

    ```

    ``` python
    >>> 2 * P(8) - 1
    H({1: 1, 3: 1, 5: 1, 7: 1, 9: 1, 11: 1, 13: 1, 15: 1})

    ```

    To perform arithmetic on individual [``H`` objects][dyce.h.H] in a pool without
    flattening, use the [``map``][dyce.p.P.map], [``rmap``][dyce.p.P.rmap], and
    [``umap``][dyce.p.P.umap] methods.

    ``` python
    >>> import operator
    >>> P(4, 6, 8).umap(operator.__neg__)
    P(-8, -6, -4)

    ```

    ``` python
    >>> P(4, 6).map(operator.__pow__, 2)
    P(H({1: 1, 4: 1, 9: 1, 16: 1}), H({1: 1, 4: 1, 9: 1, 16: 1, 25: 1, 36: 1}))

    ```

    ``` python
    >>> P(4, 6).rmap(2, operator.__pow__)
    P(H({2: 1, 4: 1, 8: 1, 16: 1}), H({2: 1, 4: 1, 8: 1, 16: 1, 32: 1, 64: 1}))

    ```

    Comparisons with [``H`` objects][dyce.h.H] work as expected.

    ``` python
    >>> from dyce import H
    >>> 3@p_d6 == H(6) + H(6) + H(6)
    True

    ```

    Indexing selects a contained histogram.

    ``` python
    >>> P(4, 6, 8)[0]
    H(4)

    ```

    Note that pools are opinionated about ordering.

    ``` python
    >>> P(8, 6, 4)
    P(4, 6, 8)
    >>> P(8, 6, 4)[0] == P(8, 4, 6)[0] == H(4)
    True

    ```

    In an extension to (departure from) the [``HableT`` protocol][dyce.h.HableT], the
    [``h`` method][dyce.p.P.h]’s implementation also affords subsets of outcomes to be
    “taken” (selected) by passing in selection criteria. Values are indexed from least
    to greatest. Negative indexes are supported and retain their idiomatic meaning.
    Modeling the sum of the greatest two faces of three six-sided dice (``3d6``) can be
    expressed as:

    ``` python
    >>> p_3d6 = 3@p_d6
    >>> p_3d6.h(-2, -1)
    H({2: 1, 3: 3, 4: 7, 5: 12, 6: 19, 7: 27, 8: 34, 9: 36, 10: 34, 11: 27, 12: 16})
    >>> print(p_3d6.h(-2, -1).format())
    avg |    8.46
    std |    2.21
    var |    4.91
      2 |   0.46% |
      3 |   1.39% |
      4 |   3.24% |#
      5 |   5.56% |##
      6 |   8.80% |####
      7 |  12.50% |######
      8 |  15.74% |#######
      9 |  16.67% |########
     10 |  15.74% |#######
     11 |  12.50% |######
     12 |   7.41% |###

    ```
    """
    __slots__: Union[str, Iterable[str]] = ("_hs",)

    # ---- Initializer -----------------------------------------------------------------

    @beartype
    def __init__(self, *args: Union[SupportsInt, "P", H]) -> None:
        r"Initializer."
        super().__init__()

        def _gen_hs() -> Iterator[H]:
            for a in args:
                if isinstance(a, H):
                    yield a
                elif isinstance(a, P):
                    for h in a._hs:
                        yield h
                elif isinstance(a, SupportsInt):
                    yield H(a)
                else:
                    raise ValueError(f"unrecognized initializer {args}")

        hs = list(h for h in _gen_hs() if h)

        try:
            hs.sort(key=lambda h: tuple(h.items()))
        except TypeError:
            # This is for outcomes that don't support direct comparisons, like symbolic
            # representations
            hs.sort(key=lambda h: str(tuple(h.items())))

        self._hs = tuple(hs)

    # ---- Overrides -------------------------------------------------------------------

    @beartype
    def __repr__(self) -> str:
        def _parts() -> Iterator[str]:
            for h in self:
                yield (str(h._simple_init) if h._simple_init is not None else repr(h))

        args = ", ".join(_parts())

        return f"{type(self).__name__}({args})"

    @beartype
    def __eq__(self, other) -> bool:
        if isinstance(other, P):
            return __eq__(self._hs, other._hs)
        else:
            return NotImplemented

    @beartype
    def __ne__(self, other) -> bool:
        if isinstance(other, P):
            return __ne__(self._hs, other._hs)
        else:
            return NotImplemented

    @beartype
    def __len__(self) -> int:
        return len(self._hs)

    @overload
    def __getitem__(self, key: SupportsIndex) -> H:
        ...

    @overload
    def __getitem__(self, key: slice) -> P:
        ...

    @beartype
    # TODO(posita): See <https://github.com/python/mypy/issues/8393>
    # TODO(posita): See <https://github.com/beartype/beartype/issues/39#issuecomment-871914114> et seq.
    def __getitem__(self, key: _GetItemT) -> Union[H, "P"]:  # type: ignore [override]
        if isinstance(key, slice):
            return P(*self._hs[key])
        else:
            return self._hs[__index__(key)]

    @beartype
    def __iter__(self) -> Iterator[H]:
        return iter(self._hs)

    @beartype
    def __matmul__(self, other: SupportsInt) -> P:
        try:
            other = as_int(other)
        except TypeError:
            return NotImplemented

        if other < 0:
            raise ValueError("argument cannot be negative")
        else:
            return P(*chain.from_iterable(repeat(self, other)))

    @beartype
    def __rmatmul__(self, other: SupportsInt) -> P:
        return self.__matmul__(other)

    @beartype
    def h(self, *which: _GetItemT) -> H:
        r"""
        Roughly equivalent to ``#!python H((sum(roll), count) for roll, count in
        self.rolls_with_counts(*which))`` with some short-circuit optimizations.

        When provided no arguments, ``#!python h`` combines (or “flattens”) contained
        histograms in accordance with the [``HableT`` protocol][dyce.h.HableT].

        ``` python
        >>> (2@P(6)).h()
        H({2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 5, 9: 4, 10: 3, 11: 2, 12: 1})

        ```

        If one or more arguments are provided, this method sums subsets of outcomes
        those arguments identify for each roll. Outcomes are ordered from least (index
        ``#!python 0``) to greatest (index ``#!python -1`` or ``#!python len(self) -
        1``). Identifiers can be ``#!python int``s or ``#!python slice``s, and can be
        mixed.

        Taking the greatest of two six-sided dice can be modeled as:

        ``` python
        >>> p_2d6 = 2@P(6)
        >>> p_2d6.h(-1)
        H({1: 1, 2: 3, 3: 5, 4: 7, 5: 9, 6: 11})
        >>> print(p_2d6.h(-1).format())
        avg |    4.47
        std |    1.40
        var |    1.97
          1 |   2.78% |#
          2 |   8.33% |####
          3 |  13.89% |######
          4 |  19.44% |#########
          5 |  25.00% |############
          6 |  30.56% |###############

        ```

        Taking the greatest two and least two faces of ten four-sided dice (``10d4``)
        can be modeled as:

        ``` python
        >>> p_10d4 = 10@P(4)
        >>> p_10d4.h(slice(2), slice(-2, None))
        H({4: 1, 5: 10, 6: 1012, 7: 5030, 8: 51973, 9: 168760, 10: 595004, 11: 168760, 12: 51973, 13: 5030, 14: 1012, 15: 10, 16: 1})
        >>> print(p_10d4.h(slice(2), slice(-2, None)).format(scaled=True))
        avg |   10.00
        std |    0.91
        var |    0.84
          4 |   0.00% |
          5 |   0.00% |
          6 |   0.10% |
          7 |   0.48% |
          8 |   4.96% |####
          9 |  16.09% |##############
         10 |  56.74% |##################################################
         11 |  16.09% |##############
         12 |   4.96% |####
         13 |   0.48% |
         14 |   0.10% |
         15 |   0.00% |
         16 |   0.00% |

        ```

        Taking all outcomes exactly once is equivalent to summing the histograms in the
        pool.

        ``` python
        >>> d6 = H(6)
        >>> d6avg = H((2, 3, 3, 4, 4, 5))
        >>> p = 2@P(d6, d6avg)
        >>> p.h(slice(None)) == p.h() == d6 + d6 + d6avg + d6avg
        True

        ```
        """
        if which:
            n = len(self)
            i = _analyze_selection(n, which)

            if i and i >= n:
                # The caller selected all dice in the pool exactly i // n times, so we
                # can short-circuit roll enumeration
                assert i % n == 0

                return self.h() * (i // n)
            else:
                return H(
                    (sum(roll), count) for roll, count in self.rolls_with_counts(*which)
                )
        else:
            # The caller offered no selection
            return sum_h(self)

    # ---- Properties ------------------------------------------------------------------

    @property
    def is_homogeneous(self) -> bool:
        r"""
        !!! warning "Experimental"

            This property should be considered experimental and may change or disappear
            in future versions.

        A flag indicating whether the pool’s population of histograms is homogeneous.

        ``` python
        >>> P(6, 6).is_homogeneous
        True
        >>> P(4, 6, 8).is_homogeneous
        False

        ```
        """
        return len(set(self._hs)) <= 1

    # ---- Methods ---------------------------------------------------------------------

    @classmethod
    @beartype
    def foreach(
        cls,
        dependent_term: Callable[..., Union[H, RealLike]],
        **independent_sources: Union["P", H, HableT, _SourceT],
    ) -> H:
        r"""
        !!! warning "Experimental"

            This method should be considered experimental and may change or disappear in
            future versions.

        Calls ``#!python dependent_term`` for each unique set of rolls from the product
        of ``independent_sources`` and accumulates the results. This is useful for
        resolving dependent probabilities. Rolls are sorted least to greatest. Returned
        histograms are always reduced to their lowest terms.

        ``` python
        >>> from dyce.p import RollT

        >>> def three_way_vs(first: RollT, second: RollT, third: RollT):
        ...   first_reversed = first[::-1]
        ...   second_reversed = second[::-1]
        ...   third_reversed = third[::-1]
        ...   if first_reversed > second_reversed and first_reversed > third_reversed:
        ...     return 1  # first is the clear winner
        ...   elif second_reversed > first_reversed and second_reversed > third_reversed:
        ...     return 2  # second is the clear winner
        ...   elif third_reversed > first_reversed and third_reversed > second_reversed:
        ...     return 3  # third is the clear winner
        ...   else:
        ...     return 0  # there was a tie somewhere

        >>> P.foreach(
        ...   three_way_vs,
        ...   first=P(6, 6),  # first has pool of two d6s
        ...   second=P(6, 6),  # second has pool of two d6s
        ...   third=P(4, 8),  # third has pool of one d4 and one d8
        ... )
        H({0: 1103, 1: 5783, 2: 5783, 3: 8067})

        ```

        When all of ``#!python foreach``’s arguments are [``P`` objects][dyce.p.P] of
        size 1 or anything other than a ``P`` object, this function behaves similarly to
        [``H.foreach``][dyce.h.H] (although the signature of the *dependent_term*
        callback function differs slightly between the two interfaces).

        ``` python
        >>> from itertools import chain
        >>> P.foreach(
        ...   lambda **kw: sum(chain(*kw.values())),  # receives single-element rolls
        ...   src1=P(6),  # pool of size 1
        ...   src2=H(6),  # histogram
        ...   src3=range(6, 0, -1),  # histogram source
        ... ) == H.foreach(
        ...   lambda **kw: sum(kw.values()),  # receives outcomes
        ...   src1=P(6).h(),  # histogram
        ...   src2=H(6),  # histogram
        ...   src3={1, 2, 3, 4, 5, 6},  # histogram source
        ... )
        True

        ```

        The ``#!python foreach`` class method is equivalent to nesting loops iterating
        over [``P.rolls_with_counts``][dyce.p.P.rolls_with_counts] for each independent
        term and then aggregating the results.

        ``` python
        >>> def dependent_term(
        ...   *,
        ...   roll_1,
        ...   roll_2,
        ...   # ...
        ...   roll_n,
        ... ):
        ...   return (
        ...     (roll_2[-1] > roll_1[-1])
        ...     + (roll_n[-1] > roll_2[-1])
        ...     # ...
        ...   )

        >>> source_1 = P(8)
        >>> source_2 = P(6, 6)
        >>> # ...
        >>> source_n = P(4, 4, 4)

        >>> h = P.foreach(
        ...   dependent_term,
        ...   roll_1=source_1,
        ...   roll_2=source_2,
        ...   # ...
        ...   roll_n=source_n,
        ... ) ; h
        H({0: 3821, 1: 5126, 2: 269})

        >>> def resolve():
        ...   for roll_1, count_1 in source_1.rolls_with_counts():
        ...     for roll_2, count_2 in source_2.rolls_with_counts():
        ...       # ...
        ...       for roll_n, count_n in source_n.rolls_with_counts():
        ...         # ...
        ...           yield dependent_term(
        ...             roll_1=roll_1,
        ...             roll_2=roll_2,
        ...             # ...
        ...             roll_n=roll_n,
        ...         ), (
        ...           count_1
        ...           * count_2
        ...           # * ...
        ...           * count_n
        ...         )

        >>> from dyce.h import aggregate_with_counts
        >>> aggregate_with_counts(resolve()) == h
        True

        ```
        """
        pools_by_kw: Dict[str, P] = {}

        for source_name, source in independent_sources.items():
            if isinstance(source, H):
                pools_by_kw[source_name] = P(source)
            elif isinstance(source, P):
                pools_by_kw[source_name] = source
            elif isinstance(source, HableT):
                pools_by_kw[source_name] = P(source.h())
            else:
                pools_by_kw[source_name] = P(H(source))

        def _kw_roll_count_tuples(
            pool_name: str,
        ) -> Iterator[Tuple[str, RollT, int]]:
            for roll, count in pools_by_kw[pool_name].rolls_with_counts():
                yield pool_name, roll, count

        def _resolve_dependent_term_for_rolls() -> Iterator[
            Tuple[Union[H, RealLike], int]
        ]:
            for kw_roll_count_tuples in product(
                *(_kw_roll_count_tuples(pool_name) for pool_name in pools_by_kw)
            ):
                combined_count = reduce(
                    __mul__, (count for _, _, count in kw_roll_count_tuples), 1
                )
                rolls_by_name = {name: roll for name, roll, _ in kw_roll_count_tuples}
                yield dependent_term(**rolls_by_name), combined_count

        return aggregate_with_counts(_resolve_dependent_term_for_rolls()).lowest_terms()

    @experimental
    @beartype
    def appearances_in_rolls(self, outcome: RealLike) -> H:
        r"""
        !!! warning "Experimental"

            This method should be considered experimental and may change or disappear in
            future versions. While it does provide a performance improvement over other
            techniques, it is not significant for most applications, and rarely
            justifies the corresponding reduction in readability.

        Returns a histogram where the outcomes (keys) are the number of times *outcome*
        appears, and the counts are the number of rolls where *outcome* appears
        precisely that number of times. Equivalent to ``#!python H((sum(1 for v in roll
        if v == outcome), count) for roll, count in self.rolls_with_counts())``, but
        much more efficient.

        ``` python
        >>> p_2d6 = P(6, 6)
        >>> list(p_2d6.rolls_with_counts())
        [((1, 1), 1), ((1, 2), 2), ((1, 3), 2), ((1, 4), 2), ((1, 5), 2), ((1, 6), 2), ...]
        >>> p_2d6.appearances_in_rolls(1)
        H({0: 25, 1: 10, 2: 1})

        ```

        ``` python
        >>> # Least efficient, by far
        >>> d4, d6 = H(4), H(6)
        >>> p_3d4_2d6 = P(d4, d4, d4, d6, d6)
        >>> H((sum(1 for v in roll if v == 3), count) for roll, count in p_3d4_2d6.rolls_with_counts())
        H({0: 675, 1: 945, 2: 522, 3: 142, 4: 19, 5: 1})

        ```

        ``` python
        >>> # Pretty darned efficient, generalizable to other boolean inquiries, and
        >>> # arguably the most readable
        >>> d4_eq3, d6_eq3 = d4.eq(3), d6.eq(3)
        >>> 3@d4_eq3 + 2@d6_eq3
        H({0: 675, 1: 945, 2: 522, 3: 142, 4: 19, 5: 1})

        ```

        ``` python
        >>> # Most efficient for large sets of dice
        >>> p_3d4_2d6.appearances_in_rolls(3)
        H({0: 675, 1: 945, 2: 522, 3: 142, 4: 19, 5: 1})

        ```

        Based on some rudimentary testing, this method appears to converge on being
        about twice as fast as the boolean accumulation technique for larger sets.

        ``` python
        --8<-- "docs/assets/perf_appearances_in_rolls.txt"
        ```

        <details>
        <summary>Source: <a href="https://github.com/posita/dyce/blob/latest/docs/assets/perf_appearances_in_rolls.ipy"><code>perf_appearances_in_rolls.ipy</code></a></summary>

        ``` python
        --8<-- "docs/assets/perf_appearances_in_rolls.ipy"
        ```
        </details>
        """
        group_counters: List[Counter[RealLike]] = []

        for h, hs in groupby(self):
            group_counter: Counter[RealLike] = counter()
            n = sum(1 for _ in hs)

            for k in range(0, n + 1):
                group_counter[k] = h.exactly_k_times_in_n(outcome, n, k) * (
                    group_counter[k] if group_counter[k] else 1
                )

            group_counters.append(group_counter)

        return sum_h(H(group_counter) for group_counter in group_counters)

    @beartype
    def roll(self) -> RollT:
        r"""
        Returns (weighted) random outcomes from contained histograms.

        !!! note "On ordering"

            This method “works” (i.e., falls back to a “natural” ordering of string
            representations) for outcomes whose relative values cannot be known (e.g.,
            symbolic expressions). This is deliberate to allow random roll functionality
            where symbolic resolution is not needed or will happen later.
        """
        return tuple(sorted_outcomes(h.roll() for h in self))

    @beartype
    def rolls_with_counts(self, *which: _GetItemT) -> Iterator[_RollCountT]:
        r"""
        Returns an iterator yielding two-tuples (pairs) that, collectively, enumerate all
        possible outcomes for the pool. The first item in the two-tuple is a sorted
        sequence of the outcomes for a distinct roll. The second is the count for that
        roll. Outcomes in each roll are ordered least to greatest.

        If one or more arguments are provided, this method selects subsets of outcomes
        for each roll. Outcomes in each roll are ordered from least (index ``#!python
        0``) to greatest (index ``#!python -1`` or ``#!python len(self) - 1``).
        Identifiers can be ``#!python int``s or ``#!python slice``s, and can be mixed
        for more flexible selections.

        ``` python
        >>> from collections import Counter

        >>> def accumulate_roll_counts(counter, roll_counts):
        ...   for roll, count in roll_counts:
        ...     counter[roll] += count
        ...   return counter

        >>> p_6d6 = 6@P(6)
        >>> every_other_d6 = accumulate_roll_counts(Counter(), p_6d6.rolls_with_counts(slice(None, None, -2))) ; every_other_d6
        Counter({(6, 4, 2): 4110, (6, 5, 3): 3390, (6, 4, 3): 3330, ..., (3, 3, 3): 13, (2, 2, 2): 7, (1, 1, 1): 1})
        >>> accumulate_roll_counts(Counter(), p_6d6.rolls_with_counts(5, 3, 1)) == every_other_d6
        True
        >>> accumulate_roll_counts(Counter(), p_6d6.rolls_with_counts(*range(5, 0, -2))) == every_other_d6
        True
        >>> accumulate_roll_counts(Counter(), p_6d6.rolls_with_counts(*(i for i in range(6, 0, -1) if i % 2 == 1))) == every_other_d6
        True

        ```

        One way to model the likelihood of achieving a “Yhatzee” (i.e., where five
        six-sided dice show the same face) on a single roll is to check rolls for
        where the least and greatest outcomes are the same.

        ``` python
        >>> p_5d6 = 5@P(6)
        >>> yhatzee_on_single_roll = H(
        ...   (1 if roll[0] == roll[-1] else 0, count)
        ...   for roll, count
        ...   in p_5d6.rolls_with_counts()
        ... )
        >>> print(yhatzee_on_single_roll.format(width=0))
        {..., 0: 99.92%, 1:  0.08%}

        ```

        !!! note "In the general case, rolls may appear more than once."

        ``` python
        >>> list(P(H(2), H(3)).rolls_with_counts())
        [((1, 1), 1), ((1, 2), 1), ((1, 3), 1), ((1, 2), 1), ((2, 2), 1), ((2, 3), 1)]

        ```

        In the above, ``#!python (1, 2)`` appears a total of two times, each with a
        count of one.

        However, if the pool is homogeneous (meaning it only contains identical
        histograms), rolls (before selection) are not repeated. (See the note on
        implementation below.)

        ``` python
        >>> list((2@P(H((-1, 0, 1)))).rolls_with_counts())
        [((-1, -1), 1), ((-1, 0), 2), ((-1, 1), 2), ((0, 0), 1), ((0, 1), 2), ((1, 1), 1)]

        ```

        Either way, by summing and counting all rolls, we can confirm identity.

        ``` python
        >>> d6 = H(6)
        >>> d6avg = H((2, 3, 3, 4, 4, 5))
        >>> p = 2@P(d6, d6avg)
        >>> H((sum(roll), count) for roll, count in p.rolls_with_counts()) == p.h() == d6 + d6 + d6avg + d6avg
        True

        ```

        This method does not try to outsmart callers by (mis)interpreting selection
        arguments. It honors selection identifier order and any redundancy.

        ``` python
        >>> p_d3_d4 = P(H(3), H(4))
        >>> # Select the second, first, then second (again) elements
        >>> list(p_d3_d4.rolls_with_counts(-1, 0, 1))
        [((1, 1, 1), 1), ((2, 1, 2), 1), ((3, 1, 3), 1), ((4, 1, 4), 1), ..., ((3, 1, 3), 1), ((3, 2, 3), 1), ((3, 3, 3), 1), ((4, 3, 4), 1)]

        ```

        Selecting the same outcomes, but in a different order, is not immediately
        comparable.

        ``` python
        >>> select_0_1 = list(p_d3_d4.rolls_with_counts(0, 1))
        >>> select_1_0 = list(p_d3_d4.rolls_with_counts(1, 0))
        >>> select_0_1 == select_1_0
        False

        ```

        Equivalence can be tested when selected outcomes are sorted.

        ``` python
        >>> sorted_0_1 = [(sorted(roll), count) for roll, count in select_0_1]
        >>> sorted_1_0 = [(sorted(roll), count) for roll, count in select_1_0]
        >>> sorted_0_1 == sorted_1_0
        True

        ```

        They can also be summed and counted, which is equivalent to calling the
        [``h`` method][dyce.p.P.h] with identical selection arguments.

        ``` python
        >>> summed_0_1 = H((sum(roll), count) for roll, count in select_0_1)
        >>> summed_1_0 = H((sum(roll), count) for roll, count in select_1_0)
        >>> summed_0_1 == summed_1_0 == p_d3_d4.h(0, 1) == p_d3_d4.h(1, 0)
        True

        ```

        !!! info "About the implementation"

            Enumeration is substantially more efficient for homogeneous pools than
            heterogeneous ones, because we are able to avoid the expensive enumeration
            of the Cartesian product using several techniques.

            Taking $k$ outcomes, where $k$ selects fewer than all $n$ outcomes, a
            homogeneous pool benefits from [Ilmari Karonen’s
            optimization](https://rpg.stackexchange.com/a/166663/71245), which appears
            to scale geometrically with $k$ times some factor of $n$ (e.g., $\log n$,
            but I haven’t bothered to figure that out yet), such that—in observed
            testing, at least—it is generally the fastest approach for $k < n$.

            Where $k = n$, we leverage the [*multinomial
            coefficient*](https://en.wikipedia.org/wiki/Permutation#Permutations_of_multisets),
            which appears to scale generally with $n$.

            $$
            {{n} \choose {{{k}_{1}},{{k}_{2}},\ldots,{{k}_{m}}}}
            = {\frac {{n}!} {{{k}_{1}}! {{k}_{2}}! \cdots {{k}_{m}}!}}
            $$

            We enumerate combinations with replacement, and then compute the number
            of permutations with repetitions for each combination. Consider ``#!python
            n@P(H(m))``. Enumerating combinations with replacement would yield all
            unique rolls.

            ``#!python ((1, 1, …, 1), (1, 1, …, 2), …, (1, 1, …, m), …, (m - 1, m, …,
            m), (m, m, …, m))``

            To determine the count for a particular roll ``#!python (a, b, …, n)``, we
            compute the multinomial coefficient for that roll and multiply by the scalar
            ``#!python H(m)[a] * H(m)[b] * … * H(m)[n]``. (See
            [this](https://www.lucamoroni.it/the-dice-roll-sum-problem/) for an in-depth
            exploration of the topic.)

            Further, this implementation attempts to optimize heterogeneous pools by
            breaking them into homogeneous groups before computing the Cartesian product
            of those sub-results. This approach allows homogeneous pools to be ordered
            without duplicates, whereas heterogeneous ones offer no such guarantees.

            As expected, this optimization allows the performance of arbitrary selection
            from mixed pools to sit between that of purely homogeneous and purely
            heterogeneous ones. Note, however, that all three appear to scale
            geometrically in some way.

            ``` python
            --8<-- "docs/assets/perf_rolls_with_counts.txt"
            ```

            <details>
            <summary>Source: <a href="https://github.com/posita/dyce/blob/latest/docs/assets/perf_rolls_with_counts.ipy"><code>perf_rolls_with_counts.ipy</code></a></summary>

            ``` python
            --8<-- "docs/assets/perf_rolls_with_counts.ipy"
            ```
            </details>
        """
        n = len(self)

        if not which:
            i: Optional[int] = n
        else:
            i = _analyze_selection(n, which)

        if i == 0 or n == 0:
            rolls_with_counts_iter: Iterable[_RollCountT] = iter(())
        else:
            groups = tuple((h, sum(1 for _ in hs)) for h, hs in groupby(self))

            if len(groups) == 1:
                # Based on cursory performance analysis, calling the homogeneous
                # implementation directly provides about a 15% performance savings over
                # merely falling through to _rwc_heterogeneous_h_groups. Maybe
                # itertools.product adds significant overhead?
                h, hn = groups[0]
                assert hn == n

                # Still in search of a better (i.e., more efficient) way:
                # https://math.stackexchange.com/questions/4173084/probability-distribution-of-k-1-k-2-cdots-k-m-selections-of-arbitrary-posi
                if i and abs(i) < n:
                    rolls_with_counts_iter = (
                        _rwc_homogeneous_n_h_using_karonen_partial_selection(
                            h, n, i, fill=0
                        )
                    )
                else:
                    rolls_with_counts_iter = (
                        _rwc_homogeneous_n_h_using_multinomial_coefficient(h, n)
                    )
            else:
                rolls_with_counts_iter = _rwc_heterogeneous_h_groups(groups, i)

        for sorted_outcomes_for_roll, roll_count in rolls_with_counts_iter:
            if which:
                taken_outcomes = tuple(getitems(sorted_outcomes_for_roll, which))
            else:
                taken_outcomes = sorted_outcomes_for_roll

            yield taken_outcomes, roll_count

    @beartype
    def map(self, op: _BinaryOperatorT, right_operand: _OperandT) -> P:
        r"""
        Shorthand for ``#!python P(*(h.map(op, right_operand) for h in self))``. See the
        [``H.map`` method][dyce.h.H.map].

        ``` python
        >>> import operator
        >>> p_3d4 = 3@P(H((-3, -1, 2, 4)))
        >>> p_3d4.map(operator.__mul__, -1)
        P(H({-4: 1, -2: 1, 1: 1, 3: 1}), H({-4: 1, -2: 1, 1: 1, 3: 1}), H({-4: 1, -2: 1, 1: 1, 3: 1}))

        ```
        """
        return P(*(h.map(op, right_operand) for h in self))

    @beartype
    def rmap(self, left_operand: RealLike, op: _BinaryOperatorT) -> P:
        r"""
        Shorthand for ``#!python P(*(h.rmap(left_operand, op) for h in self))``. See the
        [``H.rmap`` method][dyce.h.H.rmap].

        ``` python
        >>> import operator
        >>> from fractions import Fraction
        >>> p_2d4 = 2@P(H((-3, -1, 2, 4)))
        >>> p_2d4.umap(Fraction).rmap(1, operator.__truediv__)
        P(H({Fraction(-1, 1): 1, Fraction(-1, 3): 1, Fraction(1, 4): 1, Fraction(1, 2): 1}), H({Fraction(-1, 1): 1, Fraction(-1, 3): 1, Fraction(1, 4): 1, Fraction(1, 2): 1}))

        ```
        """
        return P(*(h.rmap(left_operand, op) for h in self))

    @beartype
    def umap(self, op: _UnaryOperatorT) -> P:
        r"""
        Shorthand for ``#!python P(*(h.umap(op) for h in self))``. See the
        [``H.umap`` method][dyce.h.H.umap].

        ``` python
        >>> import operator
        >>> p_3d4 = 3@P(H((-3, -1, 2, 4)))
        >>> p_3d4.umap(operator.__neg__)
        P(H({-4: 1, -2: 1, 1: 1, 3: 1}), H({-4: 1, -2: 1, 1: 1, 3: 1}), H({-4: 1, -2: 1, 1: 1, 3: 1}))
        >>> p_3d4.umap(operator.__abs__)
        P(H({1: 1, 2: 1, 3: 1, 4: 1}), H({1: 1, 2: 1, 3: 1, 4: 1}), H({1: 1, 2: 1, 3: 1, 4: 1}))

        ```
        """
        return P(*(h.umap(op) for h in self))

__slots__: Union[str, Iterable[str]] special

is_homogeneous: bool property readonly

Experimental

This property should be considered experimental and may change or disappear in future versions.

A flag indicating whether the pool’s population of histograms is homogeneous.

>>> P(6, 6).is_homogeneous
True
>>> P(4, 6, 8).is_homogeneous
False

__eq__(self, other) -> bool special

Source code in dyce/p.py
@beartype
def __eq__(self, other) -> bool:
    if isinstance(other, P):
        return __eq__(self._hs, other._hs)
    else:
        return NotImplemented
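
Because the initializer (below) sorts contained histograms, pools compare equal regardless of the order in which their histograms were supplied. A brief illustration (our own, not from the library’s doctests):

``` python
>>> P(4, 6) == P(6, 4)  # argument order does not matter
True
>>> P(6, 6) == P(6)  # but the number of histograms does
False

```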

__getitem__(self, key: _GetItemT) -> Union[H, 'P'] special

Source code in dyce/p.py
@beartype
# TODO(posita): See <https://github.com/python/mypy/issues/8393>
# TODO(posita): See <https://github.com/beartype/beartype/issues/39#issuecomment-871914114> et seq.
def __getitem__(self, key: _GetItemT) -> Union[H, "P"]:  # type: ignore [override]
    if isinstance(key, slice):
        return P(*self._hs[key])
    else:
        return self._hs[__index__(key)]
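
As the signature suggests, an integer key returns the histogram at that position in the (sorted) pool, while a slice returns a sub-pool. An illustrative sketch (ours), relying on the sorted ordering established by the initializer:

``` python
>>> p = P(4, 6, 8)
>>> p[0] == H(4)  # an int selects a single histogram
True
>>> p[:2] == P(4, 6)  # a slice selects a sub-pool
True

```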

__init__(self, *args: Union[SupportsInt, 'P', H]) -> None special

Initializer.

Source code in dyce/p.py
@beartype
def __init__(self, *args: Union[SupportsInt, "P", H]) -> None:
    r"Initializer."
    super().__init__()

    def _gen_hs() -> Iterator[H]:
        for a in args:
            if isinstance(a, H):
                yield a
            elif isinstance(a, P):
                for h in a._hs:
                    yield h
            elif isinstance(a, SupportsInt):
                yield H(a)
            else:
                raise ValueError(f"unrecognized initializer {args}")

    hs = list(h for h in _gen_hs() if h)

    try:
        hs.sort(key=lambda h: tuple(h.items()))
    except TypeError:
        # This is for outcomes that don't support direct comparisons, like symbolic
        # representations
        hs.sort(key=lambda h: str(tuple(h.items())))

    self._hs = tuple(hs)
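
As the code above shows, the initializer flattens any P arguments into their constituent histograms, drops empty histograms, and sorts what remains. For example (an illustrative doctest of ours):

``` python
>>> P(P(4, 6), 8) == P(8, 6, 4) == P(4, 6, 8)  # flattened and sorted
True
>>> P(H({}), 6) == P(6)  # empty histograms are dropped
True

```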

__iter__(self) -> Iterator[H] special

Source code in dyce/p.py
@beartype
def __iter__(self) -> Iterator[H]:
    return iter(self._hs)

__len__(self) -> int special

Source code in dyce/p.py
@beartype
def __len__(self) -> int:
    return len(self._hs)

__matmul__(self, other: SupportsInt) -> P special

Source code in dyce/p.py
@beartype
def __matmul__(self, other: SupportsInt) -> P:
    try:
        other = as_int(other)
    except TypeError:
        return NotImplemented

    if other < 0:
        raise ValueError("argument cannot be negative")
    else:
        return P(*chain.from_iterable(repeat(self, other)))
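
In other words, n@p assembles a new pool from n copies of p’s histograms. A quick illustration (ours):

``` python
>>> 3@P(6) == P(6, 6, 6)
True
>>> 2@P(4, 6) == P(4, 4, 6, 6)
True

```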

__ne__(self, other) -> bool special

Source code in dyce/p.py
@beartype
def __ne__(self, other) -> bool:
    if isinstance(other, P):
        return __ne__(self._hs, other._hs)
    else:
        return NotImplemented

__repr__(self) -> str special

Source code in dyce/p.py
@beartype
def __repr__(self) -> str:
    def _parts() -> Iterator[str]:
        for h in self:
            yield (str(h._simple_init) if h._simple_init is not None else repr(h))

    args = ", ".join(_parts())

    return f"{type(self).__name__}({args})"

__rmatmul__(self, other: SupportsInt) -> P special

Source code in dyce/p.py
@beartype
def __rmatmul__(self, other: SupportsInt) -> P:
    return self.__matmul__(other)

appearances_in_rolls(self, outcome: RealLike) -> H

Experimental

This method should be considered experimental and may change or disappear in future versions. While it does provide a performance improvement over other techniques, it is not significant for most applications, and rarely justifies the corresponding reduction in readability.

Returns a histogram where the outcomes (keys) are the number of times outcome appears, and the counts are the number of rolls where outcome appears precisely that number of times. Equivalent to H((sum(1 for v in roll if v == outcome), count) for roll, count in self.rolls_with_counts()), but much more efficient.

>>> p_2d6 = P(6, 6)
>>> list(p_2d6.rolls_with_counts())
[((1, 1), 1), ((1, 2), 2), ((1, 3), 2), ((1, 4), 2), ((1, 5), 2), ((1, 6), 2), ...]
>>> p_2d6.appearances_in_rolls(1)
H({0: 25, 1: 10, 2: 1})
>>> # Least efficient, by far
>>> d4, d6 = H(4), H(6)
>>> p_3d4_2d6 = P(d4, d4, d4, d6, d6)
>>> H((sum(1 for v in roll if v == 3), count) for roll, count in p_3d4_2d6.rolls_with_counts())
H({0: 675, 1: 945, 2: 522, 3: 142, 4: 19, 5: 1})
>>> # Pretty darned efficient, generalizable to other boolean inquiries, and
>>> # arguably the most readable
>>> d4_eq3, d6_eq3 = d4.eq(3), d6.eq(3)
>>> 3@d4_eq3 + 2@d6_eq3
H({0: 675, 1: 945, 2: 522, 3: 142, 4: 19, 5: 1})
>>> # Most efficient for large sets of dice
>>> p_3d4_2d6.appearances_in_rolls(3)
H({0: 675, 1: 945, 2: 522, 3: 142, 4: 19, 5: 1})

Based on some rudimentary testing, this method appears to converge on being about twice as fast as the boolean accumulation technique for larger sets.

%timeit 3@d4_eq3 + 2@d6_eq3
397 µs ± 17.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

%timeit P(3@P(4), 2@P(6)).appearances_in_rolls(3)
653 µs ± 23.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

%timeit 9@d4_eq3 + 6@d6_eq3
1.39 ms ± 53.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

%timeit P(9@P(4), 6@P(6)).appearances_in_rolls(3)
1.12 ms ± 35.3 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

%timeit 27@d4_eq3 + 18@d6_eq3
7.14 ms ± 223 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

%timeit P(27@P(4), 18@P(6)).appearances_in_rolls(3)
3.32 ms ± 109 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

%timeit 81@d4_eq3 + 54@d6_eq3
46.7 ms ± 1.01 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

%timeit P(81@P(4), 54@P(6)).appearances_in_rolls(3)
17.1 ms ± 416 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Source: perf_appearances_in_rolls.ipy
from dyce import H, P

p_2d6 = P(6, 6)
d4, d6 = H(4), H(6)
p_3d4_2d6 = P(d4, d4, d4, d6, d6)
d4_eq3, d6_eq3 = d4.eq(3), d6.eq(3)

print(f"%timeit 3@d4_eq3 + 2@d6_eq3")
%timeit 3@d4_eq3 + 2@d6_eq3
print()

print(f"%timeit P(3@P(4), 2@P(6)).appearances_in_rolls(3)")
%timeit P(3@P(4), 2@P(6)).appearances_in_rolls(3)
print()

print(f"%timeit 9@d4_eq3 + 6@d6_eq3")
%timeit 9@d4_eq3 + 6@d6_eq3
print()

print(f"%timeit P(9@P(4), 6@P(6)).appearances_in_rolls(3)")
%timeit P(9@P(4), 6@P(6)).appearances_in_rolls(3)
print()

print(f"%timeit 27@d4_eq3 + 18@d6_eq3")
%timeit 27@d4_eq3 + 18@d6_eq3
print()

print(f"%timeit P(27@P(4), 18@P(6)).appearances_in_rolls(3)")
%timeit P(27@P(4), 18@P(6)).appearances_in_rolls(3)
print()

print(f"%timeit 81@d4_eq3 + 54@d6_eq3")
%timeit 81@d4_eq3 + 54@d6_eq3
print()

print(f"%timeit P(81@P(4), 54@P(6)).appearances_in_rolls(3)")
%timeit P(81@P(4), 54@P(6)).appearances_in_rolls(3)
print()
Source code in dyce/p.py
@experimental
@beartype
def appearances_in_rolls(self, outcome: RealLike) -> H:
    r"""
    !!! warning "Experimental"

        This method should be considered experimental and may change or disappear in
        future versions. While it does provide a performance improvement over other
        techniques, it is not significant for most applications, and rarely
        justifies the corresponding reduction in readability.

    Returns a histogram where the outcomes (keys) are the number of times *outcome*
    appears, and the counts are the number of rolls where *outcome* appears
    precisely that number of times. Equivalent to ``#!python H((sum(1 for v in roll
    if v == outcome), count) for roll, count in self.rolls_with_counts())``, but
    much more efficient.

    ``` python
    >>> p_2d6 = P(6, 6)
    >>> list(p_2d6.rolls_with_counts())
    [((1, 1), 1), ((1, 2), 2), ((1, 3), 2), ((1, 4), 2), ((1, 5), 2), ((1, 6), 2), ...]
    >>> p_2d6.appearances_in_rolls(1)
    H({0: 25, 1: 10, 2: 1})

    ```

    ``` python
    >>> # Least efficient, by far
    >>> d4, d6 = H(4), H(6)
    >>> p_3d4_2d6 = P(d4, d4, d4, d6, d6)
    >>> H((sum(1 for v in roll if v == 3), count) for roll, count in p_3d4_2d6.rolls_with_counts())
    H({0: 675, 1: 945, 2: 522, 3: 142, 4: 19, 5: 1})

    ```

    ``` python
    >>> # Pretty darned efficient, generalizable to other boolean inquiries, and
    >>> # arguably the most readable
    >>> d4_eq3, d6_eq3 = d4.eq(3), d6.eq(3)
    >>> 3@d4_eq3 + 2@d6_eq3
    H({0: 675, 1: 945, 2: 522, 3: 142, 4: 19, 5: 1})

    ```

    ``` python
    >>> # Most efficient for large sets of dice
    >>> p_3d4_2d6.appearances_in_rolls(3)
    H({0: 675, 1: 945, 2: 522, 3: 142, 4: 19, 5: 1})

    ```

    Based on some rudimentary testing, this method appears to converge on being
    about twice as fast as the boolean accumulation technique for larger sets.

    ``` python
    --8<-- "docs/assets/perf_appearances_in_rolls.txt"
    ```

    <details>
    <summary>Source: <a href="https://github.com/posita/dyce/blob/latest/docs/assets/perf_appearances_in_rolls.ipy"><code>perf_appearances_in_rolls.ipy</code></a></summary>

    ``` python
    --8<-- "docs/assets/perf_appearances_in_rolls.ipy"
    ```
    </details>
    """
    group_counters: List[Counter[RealLike]] = []

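    # For each homogeneous group of identical histograms, tally how often
    # outcome appears exactly k times among that group's dice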
    for h, hs in groupby(self):
        group_counter: Counter[RealLike] = counter()
        n = sum(1 for _ in hs)

        for k in range(0, n + 1):
            group_counter[k] = h.exactly_k_times_in_n(outcome, n, k) * (
                group_counter[k] if group_counter[k] else 1
            )

        group_counters.append(group_counter)

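    # Summing the per-group histograms yields the distribution of total
    # appearances across the (independent) groups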
    return sum_h(H(group_counter) for group_counter in group_counters)

foreach(dependent_term: Callable[..., Union[H, RealLike]], **independent_sources: Union['P', H, HableT, _SourceT]) -> H classmethod

Experimental

This method should be considered experimental and may change or disappear in future versions.

Calls dependent_term for each unique set of rolls from the product of independent_sources and accumulates the results. This is useful for resolving dependent probabilities. Rolls are sorted least to greatest. Returned histograms are always reduced to their lowest terms.

>>> from dyce.p import RollT

>>> def three_way_vs(first: RollT, second: RollT, third: RollT):
...   first_reversed = first[::-1]
...   second_reversed = second[::-1]
...   third_reversed = third[::-1]
...   if first_reversed > second_reversed and first_reversed > third_reversed:
...     return 1  # first is the clear winner
...   elif second_reversed > first_reversed and second_reversed > third_reversed:
...     return 2  # second is the clear winner
...   elif third_reversed > first_reversed and third_reversed > second_reversed:
...     return 3  # third is the clear winner
...   else:
...     return 0  # there was a tie somewhere

>>> P.foreach(
...   three_way_vs,
...   first=P(6, 6),  # first has pool of two d6s
...   second=P(6, 6),  # second has pool of two d6s
...   third=P(4, 8),  # third has pool of one d4 and one d8
... )
H({0: 1103, 1: 5783, 2: 5783, 3: 8067})

When each of foreach’s arguments is either a P object of size 1 or anything other than a P object, this function behaves similarly to H.foreach (although the signature of the dependent_term callback function differs slightly between the two interfaces).

>>> from itertools import chain
>>> P.foreach(
...   lambda **kw: sum(chain(*kw.values())),  # receives single-element rolls
...   src1=P(6),  # pool of size 1
...   src2=H(6),  # histogram
...   src3=range(6, 0, -1),  # histogram source
... ) == H.foreach(
...   lambda **kw: sum(kw.values()),  # receives outcomes
...   src1=P(6).h(),  # histogram
...   src2=H(6),  # histogram
...   src3={1, 2, 3, 4, 5, 6},  # histogram source
... )
True

The foreach class method is equivalent to nesting loops iterating over P.rolls_with_counts for each independent term and then aggregating the results.

>>> def dependent_term(
...   *,
...   roll_1,
...   roll_2,
...   # ...
...   roll_n,
... ):
...   return (
...     (roll_2[-1] > roll_1[-1])
...     + (roll_n[-1] > roll_2[-1])
...     # ...
...   )

>>> source_1 = P(8)
>>> source_2 = P(6, 6)
>>> # ...
>>> source_n = P(4, 4, 4)

>>> h = P.foreach(
...   dependent_term,
...   roll_1=source_1,
...   roll_2=source_2,
...   # ...
...   roll_n=source_n,
... ) ; h
H({0: 3821, 1: 5126, 2: 269})

>>> def resolve():
...   for roll_1, count_1 in source_1.rolls_with_counts():
...     for roll_2, count_2 in source_2.rolls_with_counts():
...       # ...
...       for roll_n, count_n in source_n.rolls_with_counts():
...         # ...
...           yield dependent_term(
...             roll_1=roll_1,
...             roll_2=roll_2,
...             # ...
...             roll_n=roll_n,
...         ), (
...           count_1
...           * count_2
...           # * ...
...           * count_n
...         )

>>> from dyce.h import aggregate_with_counts
>>> aggregate_with_counts(resolve()) == h
True
Source code in dyce/p.py
@classmethod
@beartype
def foreach(
    cls,
    dependent_term: Callable[..., Union[H, RealLike]],
    **independent_sources: Union["P", H, HableT, _SourceT],
) -> H:
    r"""
    !!! warning "Experimental"

        This method should be considered experimental and may change or disappear in
        future versions.

    Calls ``#!python dependent_term`` for each unique set of rolls from the product
    of ``independent_sources`` and accumulates the results. This is useful for
    resolving dependent probabilities. Rolls are sorted least to greatest. Returned
    histograms are always reduced to their lowest terms.

    ``` python
    >>> from dyce.p import RollT

    >>> def three_way_vs(first: RollT, second: RollT, third: RollT):
    ...   first_reversed = first[::-1]
    ...   second_reversed = second[::-1]
    ...   third_reversed = third[::-1]
    ...   if first_reversed > second_reversed and first_reversed > third_reversed:
    ...     return 1  # first is the clear winner
    ...   elif second_reversed > first_reversed and second_reversed > third_reversed:
    ...     return 2  # second is the clear winner
    ...   elif third_reversed > first_reversed and third_reversed > second_reversed:
    ...     return 3  # third is the clear winner
    ...   else:
    ...     return 0  # there was a tie somewhere

    >>> P.foreach(
    ...   three_way_vs,
    ...   first=P(6, 6),  # first has pool of two d6s
    ...   second=P(6, 6),  # second has pool of two d6s
    ...   third=P(4, 8),  # third has pool of one d4 and one d8
    ... )
    H({0: 1103, 1: 5783, 2: 5783, 3: 8067})

    ```

    When each of ``#!python foreach``’s arguments is either a [``P`` object][dyce.p.P]
    of size 1 or anything other than a ``P`` object, this function behaves similarly to
    [``H.foreach``][dyce.h.H.foreach] (although the signature of the *dependent_term*
    callback function differs slightly between the two interfaces).

    ``` python
    >>> from itertools import chain
    >>> P.foreach(
    ...   lambda **kw: sum(chain(*kw.values())),  # receives single-element rolls
    ...   src1=P(6),  # pool of size 1
    ...   src2=H(6),  # histogram
    ...   src3=range(6, 0, -1),  # histogram source
    ... ) == H.foreach(
    ...   lambda **kw: sum(kw.values()),  # receives outcomes
    ...   src1=P(6).h(),  # histogram
    ...   src2=H(6),  # histogram
    ...   src3={1, 2, 3, 4, 5, 6},  # histogram source
    ... )
    True

    ```

    The ``#!python foreach`` class method is equivalent to nesting loops iterating
    over [``P.rolls_with_counts``][dyce.p.P.rolls_with_counts] for each independent
    term and then aggregating the results.

    ``` python
    >>> def dependent_term(
    ...   *,
    ...   roll_1,
    ...   roll_2,
    ...   # ...
    ...   roll_n,
    ... ):
    ...   return (
    ...     (roll_2[-1] > roll_1[-1])
    ...     + (roll_n[-1] > roll_2[-1])
    ...     # ...
    ...   )

    >>> source_1 = P(8)
    >>> source_2 = P(6, 6)
    >>> # ...
    >>> source_n = P(4, 4, 4)

    >>> h = P.foreach(
    ...   dependent_term,
    ...   roll_1=source_1,
    ...   roll_2=source_2,
    ...   # ...
    ...   roll_n=source_n,
    ... ) ; h
    H({0: 3821, 1: 5126, 2: 269})

    >>> def resolve():
    ...   for roll_1, count_1 in source_1.rolls_with_counts():
    ...     for roll_2, count_2 in source_2.rolls_with_counts():
    ...       # ...
    ...       for roll_n, count_n in source_n.rolls_with_counts():
    ...         # ...
    ...           yield dependent_term(
    ...             roll_1=roll_1,
    ...             roll_2=roll_2,
    ...             # ...
    ...             roll_n=roll_n,
    ...         ), (
    ...           count_1
    ...           * count_2
    ...           # * ...
    ...           * count_n
    ...         )

    >>> from dyce.h import aggregate_with_counts
    >>> aggregate_with_counts(resolve()) == h
    True

    ```
    """
    pools_by_kw: Dict[str, P] = {}

    for source_name, source in independent_sources.items():
        if isinstance(source, H):
            pools_by_kw[source_name] = P(source)
        elif isinstance(source, P):
            pools_by_kw[source_name] = source
        elif isinstance(source, HableT):
            pools_by_kw[source_name] = P(source.h())
        else:
            pools_by_kw[source_name] = P(H(source))

    def _kw_roll_count_tuples(
        pool_name: str,
    ) -> Iterator[Tuple[str, RollT, int]]:
        for roll, count in pools_by_kw[pool_name].rolls_with_counts():
            yield pool_name, roll, count

    def _resolve_dependent_term_for_rolls() -> Iterator[
        Tuple[Union[H, RealLike], int]
    ]:
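        # Take the Cartesian product of each named pool's rolls, multiply the
        # independent counts, and hand the named rolls to dependent_term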
        for kw_roll_count_tuples in product(
            *(_kw_roll_count_tuples(pool_name) for pool_name in pools_by_kw)
        ):
            combined_count = reduce(
                __mul__, (count for _, _, count in kw_roll_count_tuples), 1
            )
            rolls_by_name = {name: roll for name, roll, _ in kw_roll_count_tuples}
            yield dependent_term(**rolls_by_name), combined_count

    return aggregate_with_counts(_resolve_dependent_term_for_rolls()).lowest_terms()

h(self, *which: _GetItemT) -> H

Roughly equivalent to H((sum(roll), count) for roll, count in self.rolls_with_counts(*which)) with some short-circuit optimizations.

When provided no arguments, h combines (or “flattens”) contained histograms in accordance with the HableT protocol.

>>> (2@P(6)).h()
H({2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 5, 9: 4, 10: 3, 11: 2, 12: 1})

If one or more arguments are provided, this method sums subsets of outcomes those arguments identify for each roll. Outcomes are ordered from least (index 0) to greatest (index -1 or len(self) - 1). Identifiers can be ints or slices, and can be mixed.

Taking the greatest of two six-sided dice can be modeled as:

>>> p_2d6 = 2@P(6)
>>> p_2d6.h(-1)
H({1: 1, 2: 3, 3: 5, 4: 7, 5: 9, 6: 11})
>>> print(p_2d6.h(-1).format())
avg |    4.47
std |    1.40
var |    1.97
  1 |   2.78% |#
  2 |   8.33% |####
  3 |  13.89% |######
  4 |  19.44% |#########
  5 |  25.00% |############
  6 |  30.56% |###############

Taking the greatest two and least two faces of ten four-sided dice (10d4) can be modeled as:

>>> p_10d4 = 10@P(4)
>>> p_10d4.h(slice(2), slice(-2, None))
H({4: 1, 5: 10, 6: 1012, 7: 5030, 8: 51973, 9: 168760, 10: 595004, 11: 168760, 12: 51973, 13: 5030, 14: 1012, 15: 10, 16: 1})
>>> print(p_10d4.h(slice(2), slice(-2, None)).format(scaled=True))
avg |   10.00
std |    0.91
var |    0.84
  4 |   0.00% |
  5 |   0.00% |
  6 |   0.10% |
  7 |   0.48% |
  8 |   4.96% |####
  9 |  16.09% |##############
 10 |  56.74% |##################################################
 11 |  16.09% |##############
 12 |   4.96% |####
 13 |   0.48% |
 14 |   0.10% |
 15 |   0.00% |
 16 |   0.00% |

Taking all outcomes exactly once is equivalent to summing the histograms in the pool.

>>> d6 = H(6)
>>> d6avg = H((2, 3, 3, 4, 4, 5))
>>> p = 2@P(d6, d6avg)
>>> p.h(slice(None)) == p.h() == d6 + d6 + d6avg + d6avg
True
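
A corollary of the short-circuit optimization visible in the source below (this check is ours, not from the library’s docs): selecting every outcome more than once merely scales each roll’s sum, so the result equals the flattened histogram with its outcomes multiplied accordingly.

``` python
>>> p = 2@P(6)
>>> p.h(slice(None), slice(None)) == p.h() * 2  # every outcome summed twice
True

```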
Source code in dyce/p.py
@beartype
def h(self, *which: _GetItemT) -> H:
    r"""
    Roughly equivalent to ``#!python H((sum(roll), count) for roll, count in
    self.rolls_with_counts(*which))`` with some short-circuit optimizations.

    When provided no arguments, ``#!python h`` combines (or “flattens”) contained
    histograms in accordance with the [``HableT`` protocol][dyce.h.HableT].

    ``` python
    >>> (2@P(6)).h()
    H({2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 5, 9: 4, 10: 3, 11: 2, 12: 1})

    ```

    If one or more arguments are provided, this method sums subsets of outcomes
    those arguments identify for each roll. Outcomes are ordered from least (index
    ``#!python 0``) to greatest (index ``#!python -1`` or ``#!python len(self) -
    1``). Identifiers can be ``#!python int``s or ``#!python slice``s, and can be
    mixed.

    Taking the greatest of two six-sided dice can be modeled as:

    ``` python
    >>> p_2d6 = 2@P(6)
    >>> p_2d6.h(-1)
    H({1: 1, 2: 3, 3: 5, 4: 7, 5: 9, 6: 11})
    >>> print(p_2d6.h(-1).format())
    avg |    4.47
    std |    1.40
    var |    1.97
      1 |   2.78% |#
      2 |   8.33% |####
      3 |  13.89% |######
      4 |  19.44% |#########
      5 |  25.00% |############
      6 |  30.56% |###############

    ```

    Taking the greatest two and least two faces of ten four-sided dice (``10d4``)
    can be modeled as:

    ``` python
    >>> p_10d4 = 10@P(4)
    >>> p_10d4.h(slice(2), slice(-2, None))
    H({4: 1, 5: 10, 6: 1012, 7: 5030, 8: 51973, 9: 168760, 10: 595004, 11: 168760, 12: 51973, 13: 5030, 14: 1012, 15: 10, 16: 1})
    >>> print(p_10d4.h(slice(2), slice(-2, None)).format(scaled=True))
    avg |   10.00
    std |    0.91
    var |    0.84
      4 |   0.00% |
      5 |   0.00% |
      6 |   0.10% |
      7 |   0.48% |
      8 |   4.96% |####
      9 |  16.09% |##############
     10 |  56.74% |##################################################
     11 |  16.09% |##############
     12 |   4.96% |####
     13 |   0.48% |
     14 |   0.10% |
     15 |   0.00% |
     16 |   0.00% |

    ```

    Taking all outcomes exactly once is equivalent to summing the histograms in the
    pool.

    ``` python
    >>> d6 = H(6)
    >>> d6avg = H((2, 3, 3, 4, 4, 5))
    >>> p = 2@P(d6, d6avg)
    >>> p.h(slice(None)) == p.h() == d6 + d6 + d6avg + d6avg
    True

    ```
    """
    if which:
        n = len(self)
        i = _analyze_selection(n, which)

        if i and i >= n:
            # The caller selected all dice in the pool exactly i // n times, so we
            # can short-circuit roll enumeration
            assert i % n == 0

            return self.h() * (i // n)
        else:
            return H(
                (sum(roll), count) for roll, count in self.rolls_with_counts(*which)
            )
    else:
        # The caller offered no selection
        return sum_h(self)

map(self, op: _BinaryOperatorT, right_operand: _OperandT) -> P

Shorthand for P(*(h.map(op, right_operand) for h in self)). See the H.map method.

>>> import operator
>>> p_3d4 = 3@P(H((-3, -1, 2, 4)))
>>> p_3d4.map(operator.__mul__, -1)
P(H({-4: 1, -2: 1, 1: 1, 3: 1}), H({-4: 1, -2: 1, 1: 1, 3: 1}), H({-4: 1, -2: 1, 1: 1, 3: 1}))
Source code in dyce/p.py
@beartype
def map(self, op: _BinaryOperatorT, right_operand: _OperandT) -> P:
    r"""
    Shorthand for ``#!python P(*(h.map(op, right_operand) for h in self))``. See the
    [``H.map`` method][dyce.h.H.map].

    ``` python
    >>> import operator
    >>> p_3d4 = 3@P(H((-3, -1, 2, 4)))
    >>> p_3d4.map(operator.__mul__, -1)
    P(H({-4: 1, -2: 1, 1: 1, 3: 1}), H({-4: 1, -2: 1, 1: 1, 3: 1}), H({-4: 1, -2: 1, 1: 1, 3: 1}))

    ```
    """
    return P(*(h.map(op, right_operand) for h in self))

rmap(self, left_operand: RealLike, op: _BinaryOperatorT) -> P

Shorthand for P(*(h.rmap(left_operand, op) for h in self)). See the H.rmap method.

>>> import operator
>>> from fractions import Fraction
>>> p_2d4 = 2@P(H((-3, -1, 2, 4)))
>>> p_2d4.umap(Fraction).rmap(1, operator.__truediv__)
P(H({Fraction(-1, 1): 1, Fraction(-1, 3): 1, Fraction(1, 4): 1, Fraction(1, 2): 1}), H({Fraction(-1, 1): 1, Fraction(-1, 3): 1, Fraction(1, 4): 1, Fraction(1, 2): 1}))
Source code in dyce/p.py
@beartype
def rmap(self, left_operand: RealLike, op: _BinaryOperatorT) -> P:
    r"""
    Shorthand for ``#!python P(*(h.rmap(left_operand, op) for h in self))``. See the
    [``H.rmap`` method][dyce.h.H.rmap].

    ``` python
    >>> import operator
    >>> from fractions import Fraction
    >>> p_2d4 = 2@P(H((-3, -1, 2, 4)))
    >>> p_2d4.umap(Fraction).rmap(1, operator.__truediv__)
    P(H({Fraction(-1, 1): 1, Fraction(-1, 3): 1, Fraction(1, 4): 1, Fraction(1, 2): 1}), H({Fraction(-1, 1): 1, Fraction(-1, 3): 1, Fraction(1, 4): 1, Fraction(1, 2): 1}))

    ```
    """
    return P(*(h.rmap(left_operand, op) for h in self))

roll(self) -> RollT

Returns (weighted) random outcomes from contained histograms.

On ordering

This method “works” (i.e., falls back to a “natural” ordering of string representations) for outcomes whose relative values cannot be known (e.g., symbolic expressions). This is deliberate to allow random roll functionality where symbolic resolution is not needed or will happen later.

Source code in dyce/p.py
@beartype
def roll(self) -> RollT:
    r"""
    Returns (weighted) random outcomes from contained histograms.

    !!! note "On ordering"

        This method “works” (i.e., falls back to a “natural” ordering of string
        representations) for outcomes whose relative values cannot be known (e.g.,
        symbolic expressions). This is deliberate to allow random roll functionality
        where symbolic resolution is not needed or will happen later.
    """
    return tuple(sorted_outcomes(h.roll() for h in self))
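
A quick illustration of the result’s shape (ours): each contained histogram contributes exactly one outcome, and the roll comes back sorted.

``` python
>>> r = P(6, 6, 6).roll()
>>> len(r)
3
>>> list(r) == sorted(r)
True

```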

rolls_with_counts(self, *which: _GetItemT) -> Iterator[_RollCountT]

Returns an iterator yielding two-tuples (pairs) that, collectively, enumerate all possible outcomes for the pool. The first item in the two-tuple is a sorted sequence of the outcomes for a distinct roll. The second is the count for that roll. Outcomes in each roll are ordered least to greatest.

If one or more arguments are provided, this method selects subsets of outcomes for each roll. Outcomes in each roll are ordered from least (index 0) to greatest (index -1 or len(self) - 1). Identifiers can be ints or slices, and can be mixed for more flexible selections.

>>> from collections import Counter

>>> def accumulate_roll_counts(counter, roll_counts):
...   for roll, count in roll_counts:
...     counter[roll] += count
...   return counter

>>> p_6d6 = 6@P(6)
>>> every_other_d6 = accumulate_roll_counts(Counter(), p_6d6.rolls_with_counts(slice(None, None, -2))) ; every_other_d6
Counter({(6, 4, 2): 4110, (6, 5, 3): 3390, (6, 4, 3): 3330, ..., (3, 3, 3): 13, (2, 2, 2): 7, (1, 1, 1): 1})
>>> accumulate_roll_counts(Counter(), p_6d6.rolls_with_counts(5, 3, 1)) == every_other_d6
True
>>> accumulate_roll_counts(Counter(), p_6d6.rolls_with_counts(*range(5, 0, -2))) == every_other_d6
True
>>> accumulate_roll_counts(Counter(), p_6d6.rolls_with_counts(*(i for i in range(6, 0, -1) if i % 2 == 1))) == every_other_d6
True

One way to model the likelihood of achieving a “Yahtzee” (i.e., where five six-sided dice show the same face) on a single roll is to check each roll for whether the least and greatest outcomes are the same.

>>> p_5d6 = 5@P(6)
>>> yahtzee_on_single_roll = H(
...   (1 if roll[0] == roll[-1] else 0, count)
...   for roll, count
...   in p_5d6.rolls_with_counts()
... )
>>> print(yahtzee_on_single_roll.format(width=0))
{..., 0: 99.92%, 1:  0.08%}

In the general case, rolls may appear more than once.

>>> list(P(H(2), H(3)).rolls_with_counts())
[((1, 1), 1), ((1, 2), 1), ((1, 3), 1), ((1, 2), 1), ((2, 2), 1), ((2, 3), 1)]

In the above, (1, 2) appears a total of two times, each with counts of one.

However, if the pool is homogeneous (meaning it only contains identical histograms), rolls (before selection) are not repeated. (See the note on implementation below.)

>>> list((2@P(H((-1, 0, 1)))).rolls_with_counts())
[((-1, -1), 1), ((-1, 0), 2), ((-1, 1), 2), ((0, 0), 1), ((0, 1), 2), ((1, 1), 1)]

Either way, by summing and counting all rolls, we can confirm identity.

>>> d6 = H(6)
>>> d6avg = H((2, 3, 3, 4, 4, 5))
>>> p = 2@P(d6, d6avg)
>>> H((sum(roll), count) for roll, count in p.rolls_with_counts()) == p.h() == d6 + d6 + d6avg + d6avg
True
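
Relatedly (an illustrative check of ours), the counts across all rolls sum to the product of the contained histograms’ totals, which is 6 ** 4 here because each of p’s four histograms has a total of six.

``` python
>>> sum(count for _, count in p.rolls_with_counts()) == 6 ** len(p)
True

```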

This method does not try to outsmart callers by (mis)interpreting selection arguments. It honors selection identifier order and any redundancy.

>>> p_d3_d4 = P(H(3), H(4))
>>> # Select the second, first, then second (again) elements
>>> list(p_d3_d4.rolls_with_counts(-1, 0, 1))
[((1, 1, 1), 1), ((2, 1, 2), 1), ((3, 1, 3), 1), ((4, 1, 4), 1), ..., ((3, 1, 3), 1), ((3, 2, 3), 1), ((3, 3, 3), 1), ((4, 3, 4), 1)]

Selecting the same outcomes, but in a different order, is not immediately comparable.

>>> select_0_1 = list(p_d3_d4.rolls_with_counts(0, 1))
>>> select_1_0 = list(p_d3_d4.rolls_with_counts(1, 0))
>>> select_0_1 == select_1_0
False

Equivalence can be tested when selected outcomes are sorted.

>>> sorted_0_1 = [(sorted(roll), count) for roll, count in select_0_1]
>>> sorted_1_0 = [(sorted(roll), count) for roll, count in select_1_0]
>>> sorted_0_1 == sorted_1_0
True

They can also be summed and counted, which is equivalent to calling the h method with identical selection arguments.

>>> summed_0_1 = H((sum(roll), count) for roll, count in select_0_1)
>>> summed_1_0 = H((sum(roll), count) for roll, count in select_1_0)
>>> summed_0_1 == summed_1_0 == p_d3_d4.h(0, 1) == p_d3_d4.h(1, 0)
True

About the implementation

Enumeration is substantially more efficient for homogeneous pools than heterogeneous ones, because we are able to avoid the expensive enumeration of the Cartesian product using several techniques.

Taking \(k\) outcomes, where \(k\) selects fewer than all \(n\) outcomes, a homogeneous pool benefits from Ilmari Karonen’s optimization, which appears to scale geometrically with \(k\) times some factor of \(n\) (e.g., \(\log n\), but I haven’t bothered to figure that out yet), such that—in observed testing, at least—it is generally the fastest approach for \(k < n\).

Where \(k = n\), we leverage the multinomial coefficient, which appears to scale generally with \(n\).

\[ {{n} \choose {{{k}_{1}},{{k}_{2}},\ldots,{{k}_{m}}}} = {\frac {{n}!} {{{k}_{1}}! {{k}_{2}}! \cdots {{k}_{m}}!}} \]

We enumerate combinations with replacements and then compute the number of permutations with repetitions for each combination. Consider n@P(H(m)). Enumerating combinations with replacements would yield all unique rolls.

((1, 1, …, 1), (1, 1, …, 2), …, (1, 1, …, m), …, (m - 1, m, …, m), (m, m, …, m))

To determine the count for a particular roll (a, b, …, n), we compute the multinomial coefficient for that roll and multiply by the scalar H(m)[a] * H(m)[b] * … * H(m)[n]. (See this for an in-depth exploration of the topic.)
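
To make that counting technique concrete, here is a minimal sketch for a homogeneous pool (the helper name and structure are ours; dyce’s actual implementation, shown in the source below, delegates to _rwc_homogeneous_n_h_using_multinomial_coefficient):

``` python
from collections import Counter
from itertools import combinations_with_replacement
from math import factorial

from dyce import H, P

def rwc_multinomial_sketch(h: H, n: int):
    # Each sorted combination-with-replacement is one unique roll of n
    # identical histograms h
    for roll in combinations_with_replacement(sorted(h), n):
        # Multinomial coefficient: n! / (k_1! * k_2! * ... * k_m!), where each
        # k_i counts the repetitions of one distinct outcome in the roll
        coefficient = factorial(n)
        for repetitions in Counter(roll).values():
            coefficient //= factorial(repetitions)
        # Scale by the product of each outcome's count in h
        scalar = 1
        for outcome in roll:
            scalar *= h[outcome]
        yield roll, coefficient * scalar

# Agrees with rolls_with_counts for a homogeneous pool, e.g., 3@P(H(6))
assert dict(rwc_multinomial_sketch(H(6), 3)) == dict((3@P(6)).rolls_with_counts())
```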

Further, this implementation attempts to optimize heterogeneous pools by breaking them into homogeneous groups before computing the Cartesian product of those sub-results. This approach allows homogeneous pools to be ordered without duplicates, whereas heterogeneous ones offer no such guarantees.

As expected, this optimization allows the performance of arbitrary selection from mixed pools to sit between that of purely homogeneous and purely heterogeneous ones. Note, however, that all three appear to scale geometrically in some way.

(P(6, 6, 6, 6)).h(slice(0)):
22.7 µs ± 463 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
(P(6, 6, 6, 6)).h(slice(1)):
740 µs ± 16.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
(P(6, 6, 6, 6)).h(slice(2)):
2.25 ms ± 97.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
(P(6, 6, 6, 6)).h(slice(3)):
4.44 ms ± 219 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
(P(6, 6, 6, 6, 6, 6)).h(slice(2)):
2.12 ms ± 26 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
(P(6, 6, 6, 6, 6, 6)).h(slice(3)):
4.46 ms ± 75.5 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
(P(6, 6, 6, 6, 6, 6)).h(slice(4)):
7.89 ms ± 186 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
(P(6, 6, 6, 6, 6, 6)).h(slice(5)):
12.7 ms ± 287 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

(P(H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}), 6, 6)).h(slice(0)):
22.6 µs ± 820 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
(P(H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}), 6, 6)).h(slice(1)):
2.51 ms ± 102 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
(P(H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}), 6, 6)).h(slice(2)):
5.31 ms ± 63 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
(P(H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}), 6, 6)).h(slice(3)):
5.37 ms ± 98.5 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
(P(H({-2: 1, -1: 1, 0: 1, 1: 1, 2: 1, 3: 1}), H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}), 6, 6, 6)).h(slice(2)):
32.4 ms ± 1.36 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
(P(H({-2: 1, -1: 1, 0: 1, 1: 1, 2: 1, 3: 1}), H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}), 6, 6, 6)).h(slice(3)):
78.6 ms ± 918 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
(P(H({-2: 1, -1: 1, 0: 1, 1: 1, 2: 1, 3: 1}), H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}), 6, 6, 6)).h(slice(4)):
80.8 ms ± 2.18 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
(P(H({-2: 1, -1: 1, 0: 1, 1: 1, 2: 1, 3: 1}), H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}), 6, 6, 6)).h(slice(5)):
79.9 ms ± 2.81 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

(P(H({-3: 1, -2: 1, -1: 1, 0: 1, 1: 1, 2: 1}), H({-2: 1, -1: 1, 0: 1, 1: 1, 2: 1, 3: 1}), H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}))).h(slice(0)):
22.6 µs ± 438 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
(P(H({-3: 1, -2: 1, -1: 1, 0: 1, 1: 1, 2: 1}), H({-2: 1, -1: 1, 0: 1, 1: 1, 2: 1, 3: 1}), H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}))).h(slice(1)):
8.98 ms ± 169 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
(P(H({-3: 1, -2: 1, -1: 1, 0: 1, 1: 1, 2: 1}), H({-2: 1, -1: 1, 0: 1, 1: 1, 2: 1, 3: 1}), H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}))).h(slice(2)):
9.09 ms ± 205 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
(P(H({-3: 1, -2: 1, -1: 1, 0: 1, 1: 1, 2: 1}), H({-2: 1, -1: 1, 0: 1, 1: 1, 2: 1, 3: 1}), H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}))).h(slice(3)):
9.09 ms ± 350 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
(P(H({-5: 1, -4: 1, -3: 1, -2: 1, -1: 1, 0: 1}), H({-4: 1, -3: 1, -2: 1, -1: 1, 0: 1, 1: 1}), H({-3: 1, -2: 1, -1: 1, 0: 1, 1: 1, 2: 1}), H({-2: 1, -1: 1, 0: 1, 1: 1, 2: 1, 3: 1}), H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}))).h(slice(2)):
312 ms ± 9.72 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
(P(H({-5: 1, -4: 1, -3: 1, -2: 1, -1: 1, 0: 1}), H({-4: 1, -3: 1, -2: 1, -1: 1, 0: 1, 1: 1}), H({-3: 1, -2: 1, -1: 1, 0: 1, 1: 1, 2: 1}), H({-2: 1, -1: 1, 0: 1, 1: 1, 2: 1, 3: 1}), H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}))).h(slice(3)):
314 ms ± 2.45 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
(P(H({-5: 1, -4: 1, -3: 1, -2: 1, -1: 1, 0: 1}), H({-4: 1, -3: 1, -2: 1, -1: 1, 0: 1, 1: 1}), H({-3: 1, -2: 1, -1: 1, 0: 1, 1: 1, 2: 1}), H({-2: 1, -1: 1, 0: 1, 1: 1, 2: 1, 3: 1}), H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}))).h(slice(4)):
334 ms ± 10.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
(P(H({-5: 1, -4: 1, -3: 1, -2: 1, -1: 1, 0: 1}), H({-4: 1, -3: 1, -2: 1, -1: 1, 0: 1, 1: 1}), H({-3: 1, -2: 1, -1: 1, 0: 1, 1: 1, 2: 1}), H({-2: 1, -1: 1, 0: 1, 1: 1, 2: 1, 3: 1}), H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}))).h(slice(5)):
324 ms ± 5.48 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Source: perf_rolls_with_counts.ipy

from dyce import H, P

for n in (4, 6):
  p = n@P(6)
  for i in range(len(p) - 4, len(p)):
    print(f"({p}).h(slice({i})):")
    %timeit p.h(slice(i))

print()

for n in (2, 3):
  p = P(n@P(6), *[H(6) - m for m in range(n, 0, -1)])
  for i in range(len(p) - 4, len(p)):
    print(f"({p}).h(slice({i})):")
    %timeit p.h(slice(i))

print()

for n in (4, 6):
  p = P(*[H(6) - m for m in range(n, 0, -1)])
  for i in range(len(p) - 4, len(p)):
    print(f"({p}).h(slice({i})):")
    %timeit p.h(slice(i))

print()

Source code in dyce/p.py
@beartype
def rolls_with_counts(self, *which: _GetItemT) -> Iterator[_RollCountT]:
    r"""
    Returns an iterator yielding two-tuples (pairs) that, collectively, enumerate all
    possible outcomes for the pool. The first item in the two-tuple is a sorted
    sequence of the outcomes for a distinct roll. The second is the count for that
    roll. Outcomes in each roll are ordered least to greatest.

    If one or more arguments are provided, this method selects subsets of outcomes
    for each roll. Outcomes in each roll are ordered from least (index ``#!python
    0``) to greatest (index ``#!python -1`` or ``#!python len(self) - 1``).
    Identifiers can be ``#!python int``s or ``#!python slice``s, and can be mixed
    for more flexible selections.

    ``` python
    >>> from collections import Counter

    >>> def accumulate_roll_counts(counter, roll_counts):
    ...   for roll, count in roll_counts:
    ...     counter[roll] += count
    ...   return counter

    >>> p_6d6 = 6@P(6)
    >>> every_other_d6 = accumulate_roll_counts(Counter(), p_6d6.rolls_with_counts(slice(None, None, -2))) ; every_other_d6
    Counter({(6, 4, 2): 4110, (6, 5, 3): 3390, (6, 4, 3): 3330, ..., (3, 3, 3): 13, (2, 2, 2): 7, (1, 1, 1): 1})
    >>> accumulate_roll_counts(Counter(), p_6d6.rolls_with_counts(5, 3, 1)) == every_other_d6
    True
    >>> accumulate_roll_counts(Counter(), p_6d6.rolls_with_counts(*range(5, 0, -2))) == every_other_d6
    True
    >>> accumulate_roll_counts(Counter(), p_6d6.rolls_with_counts(*(i for i in range(6, 0, -1) if i % 2 == 1))) == every_other_d6
    True

    ```

    One way to model the likelihood of achieving a “Yahtzee” (i.e., where five
    six-sided dice show the same face) on a single roll is to check each roll for
    whether the least and greatest outcomes are the same.

    ``` python
    >>> p_5d6 = 5@P(6)
    >>> yahtzee_on_single_roll = H(
    ...   (1 if roll[0] == roll[-1] else 0, count)
    ...   for roll, count
    ...   in p_5d6.rolls_with_counts()
    ... )
    >>> print(yahtzee_on_single_roll.format(width=0))
    {..., 0: 99.92%, 1:  0.08%}

    ```

    !!! note "In the general case, rolls may appear more than once."

    ``` python
    >>> list(P(H(2), H(3)).rolls_with_counts())
    [((1, 1), 1), ((1, 2), 1), ((1, 3), 1), ((1, 2), 1), ((2, 2), 1), ((2, 3), 1)]

    ```

    In the above, ``#!python (1, 2)`` appears a total of two times, each with counts
    of one.

    However, if the pool is homogeneous (meaning it only contains identical
    histograms), rolls (before selection) are not repeated. (See the note on
    implementation below.)

    ``` python
    >>> list((2@P(H((-1, 0, 1)))).rolls_with_counts())
    [((-1, -1), 1), ((-1, 0), 2), ((-1, 1), 2), ((0, 0), 1), ((0, 1), 2), ((1, 1), 1)]

    ```

    Either way, by summing and counting all rolls, we can confirm identity.

    ``` python
    >>> d6 = H(6)
    >>> d6avg = H((2, 3, 3, 4, 4, 5))
    >>> p = 2@P(d6, d6avg)
    >>> H((sum(roll), count) for roll, count in p.rolls_with_counts()) == p.h() == d6 + d6 + d6avg + d6avg
    True

    ```

    This method does not try to outsmart callers by (mis)interpreting selection
    arguments. It honors selection identifier order and any redundancy.

    ``` python
    >>> p_d3_d4 = P(H(3), H(4))
    >>> # Select the second, first, then second (again) elements
    >>> list(p_d3_d4.rolls_with_counts(-1, 0, 1))
    [((1, 1, 1), 1), ((2, 1, 2), 1), ((3, 1, 3), 1), ((4, 1, 4), 1), ..., ((3, 1, 3), 1), ((3, 2, 3), 1), ((3, 3, 3), 1), ((4, 3, 4), 1)]

    ```

    Selecting the same outcomes, but in a different order, is not immediately
    comparable.

    ``` python
    >>> select_0_1 = list(p_d3_d4.rolls_with_counts(0, 1))
    >>> select_1_0 = list(p_d3_d4.rolls_with_counts(1, 0))
    >>> select_0_1 == select_1_0
    False

    ```

    Equivalence can be tested when selected outcomes are sorted.

    ``` python
    >>> sorted_0_1 = [(sorted(roll), count) for roll, count in select_0_1]
    >>> sorted_1_0 = [(sorted(roll), count) for roll, count in select_1_0]
    >>> sorted_0_1 == sorted_1_0
    True

    ```

    They can also be summed and counted, which is equivalent to calling the
    [``h`` method][dyce.p.P.h] with identical selection arguments.

    ``` python
    >>> summed_0_1 = H((sum(roll), count) for roll, count in select_0_1)
    >>> summed_1_0 = H((sum(roll), count) for roll, count in select_1_0)
    >>> summed_0_1 == summed_1_0 == p_d3_d4.h(0, 1) == p_d3_d4.h(1, 0)
    True

    ```

    !!! info "About the implementation"

        Enumeration is substantially more efficient for homogeneous pools than
        heterogeneous ones, because we are able to avoid the expensive enumeration
        of the Cartesian product using several techniques.

        Taking $k$ outcomes, where $k$ selects fewer than all $n$ outcomes, a
        homogeneous pool benefits from [Ilmari Karonen’s
        optimization](https://rpg.stackexchange.com/a/166663/71245), which appears
        to scale geometrically with $k$ times some factor of $n$ (e.g., $\log n$,
        but I haven’t bothered to figure that out yet), such that—in observed
        testing, at least—it is generally the fastest approach for $k < n$.

        Where $k = n$, we leverage the [*multinomial
        coefficient*](https://en.wikipedia.org/wiki/Permutation#Permutations_of_multisets),
        which appears to scale generally with $n$.

        $$
        {{n} \choose {{{k}_{1}},{{k}_{2}},\ldots,{{k}_{m}}}}
        = {\frac {{n}!} {{{k}_{1}}! {{k}_{2}}! \cdots {{k}_{m}}!}}
        $$

        We enumerate combinations with replacements and then compute the number of
        permutations with repetitions for each combination. Consider ``#!python
        n@P(H(m))``. Enumerating combinations with replacements would yield all
        unique rolls.

        ``#!python ((1, 1, …, 1), (1, 1, …, 2), …, (1, 1, …, m), …, (m - 1, m, …,
        m), (m, m, …, m))``

        To determine the count for a particular roll ``#!python (a, b, …, n)``, we
        compute the multinomial coefficient for that roll and multiply by the scalar
        ``#!python H(m)[a] * H(m)[b] * … * H(m)[n]``. (See
        [this](https://www.lucamoroni.it/the-dice-roll-sum-problem/) for an in-depth
        exploration of the topic.)

        Further, this implementation attempts to optimize heterogeneous pools by
        breaking them into homogeneous groups before computing the Cartesian product
        of those sub-results. This approach allows homogeneous pools to be ordered
        without duplicates, whereas heterogeneous ones offer no such guarantees.

        As expected, this optimization allows the performance of arbitrary selection
        from mixed pools to sit between that of purely homogeneous and purely
        heterogeneous ones. Note, however, that all three appear to scale
        geometrically in some way.

        ``` python
        --8<-- "docs/assets/perf_rolls_with_counts.txt"
        ```

        <details>
        <summary>Source: <a href="https://github.com/posita/dyce/blob/latest/docs/assets/perf_rolls_with_counts.ipy"><code>perf_rolls_with_counts.ipy</code></a></summary>

        ``` python
        --8<-- "docs/assets/perf_rolls_with_counts.ipy"
        ```
        </details>
    """
    n = len(self)

    if not which:
        i: Optional[int] = n
    else:
        i = _analyze_selection(n, which)

    if i == 0 or n == 0:
        rolls_with_counts_iter: Iterable[_RollCountT] = iter(())
    else:
        groups = tuple((h, sum(1 for _ in hs)) for h, hs in groupby(self))

        if len(groups) == 1:
            # Based on cursory performance analysis, calling the homogeneous
            # implementation directly provides about a 15% performance savings over
            # merely falling through to _rwc_heterogeneous_h_groups. Maybe
            # itertools.product adds significant overhead?
            h, hn = groups[0]
            assert hn == n

            # Still in search of a better (i.e., more efficient) way:
            # https://math.stackexchange.com/questions/4173084/probability-distribution-of-k-1-k-2-cdots-k-m-selections-of-arbitrary-posi
            if i and abs(i) < n:
                rolls_with_counts_iter = (
                    _rwc_homogeneous_n_h_using_karonen_partial_selection(
                        h, n, i, fill=0
                    )
                )
            else:
                rolls_with_counts_iter = (
                    _rwc_homogeneous_n_h_using_multinomial_coefficient(h, n)
                )
        else:
            rolls_with_counts_iter = _rwc_heterogeneous_h_groups(groups, i)

    for sorted_outcomes_for_roll, roll_count in rolls_with_counts_iter:
        if which:
            taken_outcomes = tuple(getitems(sorted_outcomes_for_roll, which))
        else:
            taken_outcomes = sorted_outcomes_for_roll

        yield taken_outcomes, roll_count

umap(self, op: _UnaryOperatorT) -> P

Shorthand for P(*(h.umap(op) for h in self)). See the H.umap method.

>>> import operator
>>> p_3d4 = 3@P(H((-3, -1, 2, 4)))
>>> p_3d4.umap(operator.__neg__)
P(H({-4: 1, -2: 1, 1: 1, 3: 1}), H({-4: 1, -2: 1, 1: 1, 3: 1}), H({-4: 1, -2: 1, 1: 1, 3: 1}))
>>> p_3d4.umap(operator.__abs__)
P(H({1: 1, 2: 1, 3: 1, 4: 1}), H({1: 1, 2: 1, 3: 1, 4: 1}), H({1: 1, 2: 1, 3: 1, 4: 1}))
Source code in dyce/p.py
@beartype
def umap(self, op: _UnaryOperatorT) -> P:
    r"""
    Shorthand for ``#!python P(*(h.umap(op) for h in self))``. See the
    [``H.umap`` method][dyce.h.H.umap].

    ``` python
    >>> import operator
    >>> p_3d4 = 3@P(H((-3, -1, 2, 4)))
    >>> p_3d4.umap(operator.__neg__)
    P(H({-4: 1, -2: 1, 1: 1, 3: 1}), H({-4: 1, -2: 1, 1: 1, 3: 1}), H({-4: 1, -2: 1, 1: 1, 3: 1}))
    >>> p_3d4.umap(operator.__abs__)
    P(H({1: 1, 2: 1, 3: 1, 4: 1}), H({1: 1, 2: 1, 3: 1, 4: 1}), H({1: 1, 2: 1, 3: 1, 4: 1}))

    ```
    """
    return P(*(h.umap(op) for h in self))