
dyce package reference

dyce provides several core primitives:

  • H – histograms (outcomes or individual dice)
  • P – collections of histograms (pools)
  • R – scalars, histograms, pools, operators, etc. for assembling roller trees (see dyce.r for details)

H (Mapping, Generic)

An immutable mapping for use as a histogram which supports arithmetic operations. This is useful for modeling discrete outcomes, like individual dice. H objects encode finite discrete probability distributions as integer counts without any denominator.

Info

The lack of an explicit denominator is intentional and has two benefits. First, a denominator is redundant. Without it, one never has to worry about probabilities summing to one (e.g., via miscalculation, floating point error, etc.). Second (and perhaps more importantly), sometimes one wants insight into non-reduced counts, not just probabilities. If needed, probabilities can always be derived, as shown below.

The initializer takes a single parameter, items. In its most explicit form, items maps outcome values to counts.

Modeling a single six-sided die (1d6) can be expressed as:

>>> from dyce import H
>>> d6 = H({1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1})

An iterable of pairs can also be used (similar to dict).

>>> d6 == H(((1, 1), (2, 1), (3, 1), (4, 1), (5, 1), (6, 1)))
True

Two shorthands are provided. If items is an iterable of numbers, counts of 1 are assumed.

>>> d6 == H((1, 2, 3, 4, 5, 6))
True

Repeated items are accumulated, as one would expect.

>>> H((2, 3, 3, 4, 4, 5))
H({2: 1, 3: 2, 4: 2, 5: 1})

If items is an integer, it is shorthand for creating a sequential range [1 .. items] (or [items .. -1] if items is negative).

>>> d6 == H(6)
True

Histograms are maps, so we can test equivalence against other maps.

>>> H(6) == {1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1}
True

Simple indexes can be used to look up an outcome’s count.

>>> H((2, 3, 3, 4, 4, 5))[3]
2

Most arithmetic operators are supported and do what one would expect. If the operand is a number, the operator applies to the outcomes.

>>> d6 + 4
H({5: 1, 6: 1, 7: 1, 8: 1, 9: 1, 10: 1})
>>> d6 * -1
H({-6: 1, -5: 1, -4: 1, -3: 1, -2: 1, -1: 1})
>>> d6 * -1 == -d6
True
>>> d6 * -1 == H(-6)
True

If the operand is another histogram, combinations are computed. Modeling the sum of two six-sided dice (2d6) can be expressed as:

>>> d6 + d6
H({2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 5, 9: 4, 10: 3, 11: 2, 12: 1})
>>> print((d6 + d6).format())
avg |    7.00
std |    2.42
var |    5.83
  2 |   2.78% |#
  3 |   5.56% |##
  4 |   8.33% |####
  5 |  11.11% |#####
  6 |  13.89% |######
  7 |  16.67% |########
  8 |  13.89% |######
  9 |  11.11% |#####
 10 |   8.33% |####
 11 |   5.56% |##
 12 |   2.78% |#

To sum n identical histograms, the matrix multiplication operator (@) provides a shorthand.

>>> 3@d6 == d6 + d6 + d6
True

The len built-in function can be used to show the number of distinct outcomes.

>>> len(2@d6)
11

The total property can be used to compute the total number of combinations and each outcome’s probability.

>>> from fractions import Fraction
>>> (2@d6).total
36
>>> [(outcome, Fraction(count, (2@d6).total)) for outcome, count in (2@d6).items()]
[(2, Fraction(1, 36)), (3, Fraction(1, 18)), (4, Fraction(1, 12)), (5, Fraction(1, 9)), (6, Fraction(5, 36)), (7, Fraction(1, 6)), ..., (12, Fraction(1, 36))]

Histograms provide common comparators (e.g., eq, ne, etc.). One way to count how often a first six-sided die shows a different face than a second is:

>>> d6.ne(d6)
H({False: 6, True: 30})
>>> print(d6.ne(d6).format())
avg |    0.83
std |    0.37
var |    0.14
  0 |  16.67% |########
  1 |  83.33% |#########################################

Or, how often a first six-sided die shows a face less than a second is:

>>> d6.lt(d6)
H({False: 21, True: 15})
>>> print(d6.lt(d6).format())
avg |    0.42
std |    0.49
var |    0.24
  0 |  58.33% |#############################
  1 |  41.67% |####################

Or how often at least one 2 will show when rolling four six-sided dice:

>>> d6_eq2 = d6.eq(2) ; d6_eq2  # how often a 2 shows on a single six-sided die
H({False: 5, True: 1})
>>> 4@d6_eq2  # count of 2s showing on 4d6
H({0: 625, 1: 500, 2: 150, 3: 20, 4: 1})
>>> (4@d6_eq2).ge(1)  # how often that count is at least one
H({False: 625, True: 671})
>>> print((4@d6_eq2).ge(1).format())
avg |    0.52
std |    0.50
var |    0.25
  0 |  48.23% |########################
  1 |  51.77% |#########################

Mind your parentheses

Parentheses are often necessary to enforce the desired order of operations. This is most often an issue with the @ operator, because it behaves differently than the d operator in most dedicated grammars. More specifically, in Python, @ has a lower precedence than . and [].

>>> 2@d6[7]  # type: ignore [operator]
Traceback (most recent call last):
  ...
KeyError: 7
>>> 2@d6.le(7)  # probably not what was intended
H({2: 36})
>>> 2@d6.le(7) == 2@(d6.le(7))
True
>>> (2@d6)[7]
6
>>> (2@d6).le(7)
H({False: 15, True: 21})
>>> 2@d6.le(7) == (2@d6).le(7)
False

Counts are generally accumulated without reduction. To reduce, call the lowest_terms method.

>>> d6.ge(4)
H({False: 3, True: 3})
>>> d6.ge(4).lowest_terms()
H({False: 1, True: 1})

Testing equivalence implicitly performs reductions of operands.

>>> d6.ge(4) == d6.ge(4).lowest_terms()
True

__slots__: Union[str, Iterable[str]] special

total: int property readonly

Experimental

This property should be considered experimental and may change or disappear in future versions.

Equivalent to sum(self.counts()).
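
For example:

>>> H(6).total
6
>>> (2@H(6)).total
36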

__abs__(self) -> H special

Source code in dyce/h.py
@beartype
def __abs__(self) -> H:
    return self.umap(__abs__)

__add__(self, other: _OperandT) -> H special

Source code in dyce/h.py
@beartype
def __add__(self, other: _OperandT) -> H:
    try:
        return self.map(__add__, other)
    except NotImplementedError:
        return NotImplemented

__and__(self, other: Union[SupportsIntSCU, 'H', 'HableT']) -> H special

Source code in dyce/h.py
@beartype
def __and__(self, other: Union[SupportsIntSCU, "H", "HableT"]) -> H:
    try:
        if isinstance(other, SupportsInt):
            other = as_int(other)

        return self.map(__and__, other)
    except (NotImplementedError, TypeError):
        return NotImplemented

__eq__(self, other) -> bool special

Source code in dyce/h.py
@beartype
def __eq__(self, other) -> bool:
    if isinstance(other, HableT):
        return __eq__(self, other.h())
    elif isinstance(other, H):
        return __eq__(self.lowest_terms()._h, other.lowest_terms()._h)
    else:
        return super().__eq__(other)

__floordiv__(self, other: _OperandT) -> H special

Source code in dyce/h.py
@beartype
def __floordiv__(self, other: _OperandT) -> H:
    try:
        return self.map(__floordiv__, other)
    except NotImplementedError:
        return NotImplemented

__getitem__(self, key: RealLikeSCU) -> int special

Source code in dyce/h.py
@beartype
def __getitem__(self, key: RealLikeSCU) -> int:
    return __getitem__(self._h, key)

__hash__(self) -> int special

Return hash(self).
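
Because the hash is computed from the lowest-terms form (see the source below), histograms that reduce to the same counts hash identically:

>>> hash(H(6)) == hash(H({1: 2, 2: 2, 3: 2, 4: 2, 5: 2, 6: 2}))
True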

Source code in dyce/h.py
@beartype
def __hash__(self) -> int:
    return hash(frozenset(self._lowest_terms()))

__init__(self, items: _SourceT) -> None special

Initializer.

Source code in dyce/h.py
@beartype
def __init__(self, items: _SourceT) -> None:
    r"Initializer."
    super().__init__()
    self._simple_init: Optional[int] = None
    tmp: Counter[RealLikeSCU] = counter()

    if isinstance(items, MappingC):
        items = items.items()

    if isinstance(items, SupportsInt):
        if items != 0:
            self._simple_init = as_int(items)
            outcome_range = range(
                self._simple_init,
                0,
                1 if self._simple_init < 0 else -1,  # count toward zero
            )

            if isinstance(items, RealLike):
                outcome_type = type(items)
                tmp.update({outcome_type(i): 1 for i in outcome_range})
            else:
                tmp.update({i: 1 for i in outcome_range})
    elif isinstance(items, HableT):
        tmp.update(items.h())
    elif isinstance(items, IterableC):
        # items is either an Iterable[RealLikeSCU] or an Iterable[Tuple[RealLikeSCU,
        # SupportsIntSCU]] (although this technically supports
        # Iterable[Union[RealLikeSCU, Tuple[RealLikeSCU, SupportsIntSCU]]])
        for item in items:
            if isinstance(item, tuple):
                outcome, count = item
                tmp[outcome] += as_int(count)
            else:
                tmp[item] += 1
    else:
        raise ValueError(f"unrecognized initializer {items}")

    # Sort and omit zero counts. As of Python 3.7, insertion order of keys is
    # preserved.
    self._h: _MappingT = {
        outcome: tmp[outcome]
        for outcome in sorted_outcomes(tmp)
        if tmp[outcome] != 0
    }

__invert__(self) -> H special

Source code in dyce/h.py
@beartype
def __invert__(self) -> H:
    return self.umap(__invert__)

__iter__(self) -> Iterator[RealLikeSCU] special

Source code in dyce/h.py
@beartype
def __iter__(self) -> Iterator[RealLikeSCU]:
    return iter(self._h)

__len__(self) -> int special

Source code in dyce/h.py
@beartype
def __len__(self) -> int:
    return len(self._h)

__matmul__(self, other: SupportsIntSCU) -> H special

Source code in dyce/h.py
@beartype
def __matmul__(self, other: SupportsIntSCU) -> H:
    try:
        other = as_int(other)
    except TypeError:
        return NotImplemented

    if other < 0:
        raise ValueError("argument cannot be negative")
    else:
        return sum_h(repeat(self, other))

__mod__(self, other: _OperandT) -> H special

Source code in dyce/h.py
@beartype
def __mod__(self, other: _OperandT) -> H:
    try:
        return self.map(__mod__, other)
    except NotImplementedError:
        return NotImplemented

__mul__(self, other: _OperandT) -> H special

Source code in dyce/h.py
@beartype
def __mul__(self, other: _OperandT) -> H:
    try:
        return self.map(__mul__, other)
    except NotImplementedError:
        return NotImplemented

__ne__(self, other) -> bool special

Source code in dyce/h.py
@beartype
def __ne__(self, other) -> bool:
    if isinstance(other, HableT):
        return __ne__(self, other.h())
    elif isinstance(other, H):
        return not __eq__(self, other)
    else:
        return super().__ne__(other)

__neg__(self) -> H special

Source code in dyce/h.py
@beartype
def __neg__(self) -> H:
    return self.umap(__neg__)

__or__(self, other: Union[SupportsIntSCU, 'H', 'HableT']) -> H special

Source code in dyce/h.py
@beartype
def __or__(self, other: Union[SupportsIntSCU, "H", "HableT"]) -> H:
    try:
        if isinstance(other, SupportsInt):
            other = as_int(other)

        return self.map(__or__, other)
    except (NotImplementedError, TypeError):
        return NotImplemented

__pos__(self) -> H special

Source code in dyce/h.py
@beartype
def __pos__(self) -> H:
    return self.umap(__pos__)

__pow__(self, other: _OperandT) -> H special

Source code in dyce/h.py
@beartype
def __pow__(self, other: _OperandT) -> H:
    try:
        return self.map(__pow__, other)
    except NotImplementedError:
        return NotImplemented

__radd__(self, other: RealLikeSCU) -> H special

Source code in dyce/h.py
@beartype
def __radd__(self, other: RealLikeSCU) -> H:
    try:
        return self.rmap(other, __add__)
    except NotImplementedError:
        return NotImplemented

__rand__(self, other: SupportsIntSCU) -> H special

Source code in dyce/h.py
@beartype
def __rand__(self, other: SupportsIntSCU) -> H:
    try:
        return self.rmap(as_int(other), __and__)
    except (NotImplementedError, TypeError):
        return NotImplemented

__repr__(self) -> str special

Source code in dyce/h.py
@beartype
def __repr__(self) -> str:
    if self._simple_init is not None:
        arg = str(self._simple_init)
    elif sys.version_info >= (3, 8):
        arg = pformat(self._h, sort_dicts=False)
    else:
        arg = dict.__repr__(self._h)

    return f"{type(self).__name__}({arg})"

__rfloordiv__(self, other: RealLikeSCU) -> H special

Source code in dyce/h.py
@beartype
def __rfloordiv__(self, other: RealLikeSCU) -> H:  # type: ignore [misc]
    try:
        return self.rmap(other, __floordiv__)
    except NotImplementedError:
        return NotImplemented

__rmatmul__(self, other: SupportsIntSCU) -> H special

Source code in dyce/h.py
@beartype
def __rmatmul__(self, other: SupportsIntSCU) -> H:
    return self.__matmul__(other)

__rmod__(self, other: RealLikeSCU) -> H special

Source code in dyce/h.py
@beartype
def __rmod__(self, other: RealLikeSCU) -> H:
    try:
        return self.rmap(other, __mod__)
    except NotImplementedError:
        return NotImplemented

__rmul__(self, other: RealLikeSCU) -> H special

Source code in dyce/h.py
@beartype
def __rmul__(self, other: RealLikeSCU) -> H:
    try:
        return self.rmap(other, __mul__)
    except NotImplementedError:
        return NotImplemented

__ror__(self, other: SupportsIntSCU) -> H special

Source code in dyce/h.py
@beartype
def __ror__(self, other: SupportsIntSCU) -> H:
    try:
        return self.rmap(as_int(other), __or__)
    except (NotImplementedError, TypeError):
        return NotImplemented

__rpow__(self, other: RealLikeSCU) -> H special

Source code in dyce/h.py
@beartype
def __rpow__(self, other: RealLikeSCU) -> H:
    try:
        return self.rmap(other, __pow__)
    except NotImplementedError:
        return NotImplemented

__rsub__(self, other: RealLikeSCU) -> H special

Source code in dyce/h.py
@beartype
def __rsub__(self, other: RealLikeSCU) -> H:
    try:
        return self.rmap(other, __sub__)
    except NotImplementedError:
        return NotImplemented

__rtruediv__(self, other: RealLikeSCU) -> H special

Source code in dyce/h.py
@beartype
def __rtruediv__(self, other: RealLikeSCU) -> H:
    try:
        return self.rmap(other, __truediv__)
    except NotImplementedError:
        return NotImplemented

__rxor__(self, other: SupportsIntSCU) -> H special

Source code in dyce/h.py
@beartype
def __rxor__(self, other: SupportsIntSCU) -> H:
    try:
        return self.rmap(as_int(other), __xor__)
    except (NotImplementedError, TypeError):
        return NotImplemented

__sub__(self, other: _OperandT) -> H special

Source code in dyce/h.py
@beartype
def __sub__(self, other: _OperandT) -> H:
    try:
        return self.map(__sub__, other)
    except NotImplementedError:
        return NotImplemented

__truediv__(self, other: _OperandT) -> H special

Source code in dyce/h.py
@beartype
def __truediv__(self, other: _OperandT) -> H:
    try:
        return self.map(__truediv__, other)
    except NotImplementedError:
        return NotImplemented

__xor__(self, other: Union[SupportsIntSCU, 'H', 'HableT']) -> H special

Source code in dyce/h.py
@beartype
def __xor__(self, other: Union[SupportsIntSCU, "H", "HableT"]) -> H:
    try:
        if isinstance(other, SupportsInt):
            other = as_int(other)

        return self.map(__xor__, other)
    except NotImplementedError:
        return NotImplemented

accumulate(self, other: _SourceT) -> H

Accumulates counts.

>>> H(4).accumulate(H(6))
H({1: 2, 2: 2, 3: 2, 4: 2, 5: 1, 6: 1})
Source code in dyce/h.py
@beartype
def accumulate(self, other: _SourceT) -> H:
    r"""
    Accumulates counts.

    ``` python
    >>> H(4).accumulate(H(6))
    H({1: 2, 2: 2, 3: 2, 4: 2, 5: 1, 6: 1})

    ```
    """
    if isinstance(other, MappingC):
        other = other.items()
    elif not isinstance(other, IterableC):
        other = cast(Iterable[RealLikeSCU], (other,))

    return type(self)(chain(self.items(), cast(Iterable, other)))

counts(self) -> ValuesView[int]

More descriptive synonym for the values method.
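
For example:

>>> list(H((2, 3, 3, 4, 4, 5)).counts())
[1, 2, 2, 1]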

Source code in dyce/h.py
@beartype
def counts(self) -> ValuesView[int]:
    r"""
    More descriptive synonym for the [``values`` method][dyce.h.H.values].
    """
    return self._h.values()

distribution(self, fill_items: Optional[_MappingT] = None, rational_t: _RationalInitializerT[_T] = Fraction) -> Iterator[Tuple[RealLikeSCU, _T]]

Presentation helper function returning an iterator for each outcome/count or outcome/probability pair.

>>> h = H((1, 2, 3, 3, 4, 4, 5, 6))
>>> list(h.distribution())
[(1, Fraction(1, 8)), (2, Fraction(1, 8)), (3, Fraction(1, 4)), (4, Fraction(1, 4)), (5, Fraction(1, 8)), (6, Fraction(1, 8))]
>>> list(h.ge(3).distribution())
[(False, Fraction(1, 4)), (True, Fraction(3, 4))]

If provided, fill_items supplies defaults for any “missing” outcomes.

>>> list(h.distribution())
[(1, Fraction(1, 8)), (2, Fraction(1, 8)), (3, Fraction(1, 4)), (4, Fraction(1, 4)), (5, Fraction(1, 8)), (6, Fraction(1, 8))]
>>> list(h.distribution(fill_items={0: 0, 7: 0}))
[(0, Fraction(0, 1)), (1, Fraction(1, 8)), (2, Fraction(1, 8)), (3, Fraction(1, 4)), (4, Fraction(1, 4)), (5, Fraction(1, 8)), (6, Fraction(1, 8)), (7, Fraction(0, 1))]

Experimental

The rational_t argument to this method should be considered experimental and may change or disappear in future versions.

If provided, rational_t must be a callable that takes two ints (a numerator and denominator) and returns an instance of a desired (but otherwise arbitrary) type.

>>> list(h.distribution(rational_t=lambda n, d: f"{n}/{d}"))
[(1, '1/8'), (2, '1/8'), (3, '2/8'), (4, '2/8'), (5, '1/8'), (6, '1/8')]
>>> import sympy
>>> list(h.distribution(rational_t=sympy.Rational))
[(1, 1/8), (2, 1/8), (3, 1/4), (4, 1/4), (5, 1/8), (6, 1/8)]
>>> import sage.rings.rational  # doctest: +SKIP
>>> list(h.distribution(rational_t=lambda n, d: sage.rings.rational.Rational((n, d))))  # doctest: +SKIP
[(1, 1/8), (2, 1/8), (3, 1/4), (4, 1/4), (5, 1/8), (6, 1/8)]

Note

The arguments passed to rational_t are not reduced to the lowest terms.

The rational_t argument is a convenience. Iteration or comprehension can be used to accomplish something similar.

>>> [(outcome, f"{probability.numerator}/{probability.denominator}") for outcome, probability in (h).distribution()]
[(1, '1/8'), (2, '1/8'), (3, '1/4'), (4, '1/4'), (5, '1/8'), (6, '1/8')]

Many number implementations can convert directly from fractions.Fraction values.

>>> import sympy.abc
>>> [(outcome, sympy.Rational(probability)) for outcome, probability in (h + sympy.abc.x).distribution()]
[(x + 1, 1/8), (x + 2, 1/8), (x + 3, 1/4), (x + 4, 1/4), (x + 5, 1/8), (x + 6, 1/8)]
>>> import sage.rings.rational  # doctest: +SKIP
>>> [(outcome, sage.rings.rational.Rational(probability)) for outcome, probability in h.distribution()]  # doctest: +SKIP
[(1, 1/8), (2, 1/8), (3, 1/4), (4, 1/4), (5, 1/8), (6, 1/8)]
Source code in dyce/h.py
@experimental
@beartype
def distribution(
    self,
    fill_items: Optional[_MappingT] = None,
    # TODO(posita): See <https://github.com/python/mypy/issues/10854> for context on
    # all the @overload work-around nonsense above and remove those once that issue
    # is addressed.
    rational_t: _RationalInitializerT[_T] = cast(_RationalInitializerT, Fraction),
) -> Iterator[Tuple[RealLikeSCU, _T]]:
    r"""
    Presentation helper function returning an iterator for each outcome/count or
    outcome/probability pair.

    ``` python
    >>> h = H((1, 2, 3, 3, 4, 4, 5, 6))
    >>> list(h.distribution())
    [(1, Fraction(1, 8)), (2, Fraction(1, 8)), (3, Fraction(1, 4)), (4, Fraction(1, 4)), (5, Fraction(1, 8)), (6, Fraction(1, 8))]
    >>> list(h.ge(3).distribution())
    [(False, Fraction(1, 4)), (True, Fraction(3, 4))]

    ```

    If provided, *fill_items* supplies defaults for any “missing” outcomes.

    ``` python
    >>> list(h.distribution())
    [(1, Fraction(1, 8)), (2, Fraction(1, 8)), (3, Fraction(1, 4)), (4, Fraction(1, 4)), (5, Fraction(1, 8)), (6, Fraction(1, 8))]
    >>> list(h.distribution(fill_items={0: 0, 7: 0}))
    [(0, Fraction(0, 1)), (1, Fraction(1, 8)), (2, Fraction(1, 8)), (3, Fraction(1, 4)), (4, Fraction(1, 4)), (5, Fraction(1, 8)), (6, Fraction(1, 8)), (7, Fraction(0, 1))]

    ```

    !!! warning "Experimental"

        The *rational_t* argument to this method should be considered experimental
        and may change or disappear in future versions.

    If provided, *rational_t* must be a callable that takes two ``#!python int``s (a
    numerator and denominator) and returns an instance of a desired (but otherwise
    arbitrary) type.

    ``` python
    >>> list(h.distribution(rational_t=lambda n, d: f"{n}/{d}"))
    [(1, '1/8'), (2, '1/8'), (3, '2/8'), (4, '2/8'), (5, '1/8'), (6, '1/8')]

    ```

    ``` python
    >>> import sympy
    >>> list(h.distribution(rational_t=sympy.Rational))
    [(1, 1/8), (2, 1/8), (3, 1/4), (4, 1/4), (5, 1/8), (6, 1/8)]

    ```

    ``` python
    >>> import sage.rings.rational  # doctest: +SKIP
    >>> list(h.distribution(rational_t=lambda n, d: sage.rings.rational.Rational((n, d))))  # doctest: +SKIP
    [(1, 1/8), (2, 1/8), (3, 1/4), (4, 1/4), (5, 1/8), (6, 1/8)]

    ```

    !!! note

        The arguments passed to *rational_t* are not reduced to the lowest terms.

    The *rational_t* argument is a convenience. Iteration or comprehension can be
    used to accomplish something similar.

    ``` python
    >>> [(outcome, f"{probability.numerator}/{probability.denominator}") for outcome, probability in (h).distribution()]
    [(1, '1/8'), (2, '1/8'), (3, '1/4'), (4, '1/4'), (5, '1/8'), (6, '1/8')]

    ```

    Many number implementations can convert directly from ``#!python
    fractions.Fraction``s.

    ``` python
    >>> import sympy.abc
    >>> [(outcome, sympy.Rational(probability)) for outcome, probability in (h + sympy.abc.x).distribution()]
    [(x + 1, 1/8), (x + 2, 1/8), (x + 3, 1/4), (x + 4, 1/4), (x + 5, 1/8), (x + 6, 1/8)]

    ```

    ``` python
    >>> import sage.rings.rational  # doctest: +SKIP
    >>> [(outcome, sage.rings.rational.Rational(probability)) for outcome, probability in h.distribution()]  # doctest: +SKIP
    [(1, 1/8), (2, 1/8), (3, 1/4), (4, 1/4), (5, 1/8), (6, 1/8)]

    ```
    """
    if fill_items is None:
        fill_items = {}

    combined = dict(chain(fill_items.items(), self.items()))
    total = sum(combined.values()) or 1

    return (
        (outcome, rational_t(combined[outcome], total))
        for outcome in sorted_outcomes(combined)
    )

distribution_xy(self, fill_items: Optional[_MappingT] = None) -> Tuple[Tuple[RealLikeSCU, ...], Tuple[float, ...]]

Presentation helper function returning a “zipped” arrangement of the output from the distribution method, ensuring the values are floats.

>>> list(H(6).distribution())
[(1, Fraction(1, 6)), (2, Fraction(1, 6)), (3, Fraction(1, 6)), (4, Fraction(1, 6)), (5, Fraction(1, 6)), (6, Fraction(1, 6))]
>>> H(6).distribution_xy()
((1, 2, 3, 4, 5, 6), (0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666))
Source code in dyce/h.py
@beartype
def distribution_xy(
    self,
    fill_items: Optional[_MappingT] = None,
) -> Tuple[Tuple[RealLikeSCU, ...], Tuple[float, ...]]:
    r"""
    Presentation helper function returning an iterator for a “zipped” arrangement of the
    output from the [``distribution`` method][dyce.h.H.distribution] and ensures the
    values are ``#!python float``s.

    ``` python
    >>> list(H(6).distribution())
    [(1, Fraction(1, 6)), (2, Fraction(1, 6)), (3, Fraction(1, 6)), (4, Fraction(1, 6)), (5, Fraction(1, 6)), (6, Fraction(1, 6))]
    >>> H(6).distribution_xy()
    ((1, 2, 3, 4, 5, 6), (0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666, 0.16666666666666666))

    ```
    """
    # TODO(posita): See <https://github.com/python/typing/issues/193>
    return tuple(  # type: ignore [return-value]
        zip(
            *(
                (outcome, float(probability))
                for outcome, probability in self.distribution(fill_items)
            )
        )
    )

eq(self, other: _OperandT) -> H

Shorthand for self.map(operator.__eq__, other).umap(bool).

>>> H(6).eq(3)
H({False: 5, True: 1})

See the map and umap methods.

Source code in dyce/h.py
@beartype
def eq(
    self,
    other: _OperandT,
) -> H:
    r"""
    Shorthand for ``#!python self.map(operator.__eq__, other).umap(bool)``.

    ``` python
    >>> H(6).eq(3)
    H({False: 5, True: 1})

    ```

    See the [``map``][dyce.h.H.map] and [``umap``][dyce.h.H.umap] methods.
    """
    return self.map(__eq__, other).umap(bool)

exactly_k_times_in_n(self, outcome: RealLikeSCU, n: SupportsIntSCU, k: SupportsIntSCU) -> int

Experimental

This method should be considered experimental and may change or disappear in future versions.

Computes and returns the count of combinations where outcome appears exactly k times among n@self.

>>> H(6).exactly_k_times_in_n(outcome=5, n=4, k=2)
150
>>> H((2, 3, 3, 4, 4, 5)).exactly_k_times_in_n(outcome=2, n=3, k=3)
1
>>> H((2, 3, 3, 4, 4, 5)).exactly_k_times_in_n(outcome=4, n=3, k=3)
8
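
As an independent consistency check (not part of the method itself), the first result above agrees with the count that can be read from the equivalent @ computation:

>>> (4@H(6).eq(5))[2]
150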
Source code in dyce/h.py
@experimental
@beartype
def exactly_k_times_in_n(
    self,
    outcome: RealLikeSCU,
    n: SupportsIntSCU,
    k: SupportsIntSCU,
) -> int:
    r"""
    !!! warning "Experimental"

        This method should be considered experimental and may change or disappear in
        future versions.

    Computes and returns the probability distribution where *outcome* appears
    exactly *k* times among ``#!python n@self``.

    ``` python
    >>> H(6).exactly_k_times_in_n(outcome=5, n=4, k=2)
    150
    >>> H((2, 3, 3, 4, 4, 5)).exactly_k_times_in_n(outcome=2, n=3, k=3)
    1
    >>> H((2, 3, 3, 4, 4, 5)).exactly_k_times_in_n(outcome=4, n=3, k=3)
    8

    ```
    """
    n = as_int(n)
    k = as_int(k)
    assert k <= n
    c_outcome = self.get(outcome, 0)

    return comb(n, k) * c_outcome ** k * (self.total - c_outcome) ** (n - k)

explode(self, max_depth: SupportsIntSCU = 1) -> H

Shorthand for self.substitute(lambda h, outcome: h if outcome == max(h) else outcome, operator.__add__, max_depth).

>>> H(6).explode(max_depth=2)
H({1: 36, 2: 36, 3: 36, 4: 36, 5: 36, 7: 6, 8: 6, 9: 6, 10: 6, 11: 6, 13: 1, 14: 1, 15: 1, 16: 1, 17: 1, 18: 1})

See the substitute method.

Source code in dyce/h.py
@beartype
def explode(self, max_depth: SupportsIntSCU = 1) -> H:
    r"""
    Shorthand for ``#!python self.substitute(lambda h, outcome: h if outcome == max(h)
    else outcome, operator.__add__, max_depth)``.

    ``` python
    >>> H(6).explode(max_depth=2)
    H({1: 36, 2: 36, 3: 36, 4: 36, 5: 36, 7: 6, 8: 6, 9: 6, 10: 6, 11: 6, 13: 1, 14: 1, 15: 1, 16: 1, 17: 1, 18: 1})

    ```

    See the [``substitute`` method][dyce.h.H.substitute].
    """
    return self.substitute(
        lambda h, outcome: h if outcome == max(h) else outcome,
        __add__,
        max_depth,
    )

format(self, fill_items: Optional[_MappingT] = None, width: SupportsIntSCU = 65, scaled: bool = False, tick: str = '#', sep: str = '\n') -> str

Returns a formatted string representation of the histogram. If provided, fill_items supplies defaults for any missing outcomes. If width is greater than zero, a horizontal bar ASCII graph is included using tick and sep (which are otherwise ignored if width is zero or less).

>>> print(H(6).format(width=0))
{avg: 3.50, 1: 16.67%, 2: 16.67%, 3: 16.67%, 4: 16.67%, 5: 16.67%, 6: 16.67%}
>>> print((2@H(6)).format(fill_items={i: 0 for i in range(1, 21)}, tick="@"))
avg |    7.00
std |    2.42
var |    5.83
  1 |   0.00% |
  2 |   2.78% |@
  3 |   5.56% |@@
  4 |   8.33% |@@@@
  5 |  11.11% |@@@@@
  6 |  13.89% |@@@@@@
  7 |  16.67% |@@@@@@@@
  8 |  13.89% |@@@@@@
  9 |  11.11% |@@@@@
 10 |   8.33% |@@@@
 11 |   5.56% |@@
 12 |   2.78% |@
 13 |   0.00% |
 14 |   0.00% |
 15 |   0.00% |
 16 |   0.00% |
 17 |   0.00% |
 18 |   0.00% |
 19 |   0.00% |
 20 |   0.00% |

If scaled is True, horizontal bars are scaled to width.

>>> h = (2@H(6)).ge(7)
>>> print(f"{' 65 chars wide -->|':->65}")
---------------------------------------------- 65 chars wide -->|
>>> print(h.format(scaled=False))
avg |    0.58
std |    0.49
var |    0.24
  0 |  41.67% |####################
  1 |  58.33% |#############################
>>> print(h.format(scaled=True))
avg |    0.58
std |    0.49
var |    0.24
  0 |  41.67% |###################################
  1 |  58.33% |##################################################
Source code in dyce/h.py
@beartype
def format(
    self,
    fill_items: Optional[_MappingT] = None,
    width: SupportsIntSCU = _ROW_WIDTH,
    scaled: bool = False,
    tick: str = "#",
    sep: str = os.linesep,
) -> str:
    r"""
    Returns a formatted string representation of the histogram. If provided,
    *fill_items* supplies defaults for any missing outcomes. If *width* is greater
    than zero, a horizontal bar ASCII graph is printed using *tick* and *sep* (which
    are otherwise ignored if *width* is zero or less).

    ``` python
    >>> print(H(6).format(width=0))
    {avg: 3.50, 1: 16.67%, 2: 16.67%, 3: 16.67%, 4: 16.67%, 5: 16.67%, 6: 16.67%}

    ```

    ``` python
    >>> print((2@H(6)).format(fill_items={i: 0 for i in range(1, 21)}, tick="@"))
    avg |    7.00
    std |    2.42
    var |    5.83
      1 |   0.00% |
      2 |   2.78% |@
      3 |   5.56% |@@
      4 |   8.33% |@@@@
      5 |  11.11% |@@@@@
      6 |  13.89% |@@@@@@
      7 |  16.67% |@@@@@@@@
      8 |  13.89% |@@@@@@
      9 |  11.11% |@@@@@
     10 |   8.33% |@@@@
     11 |   5.56% |@@
     12 |   2.78% |@
     13 |   0.00% |
     14 |   0.00% |
     15 |   0.00% |
     16 |   0.00% |
     17 |   0.00% |
     18 |   0.00% |
     19 |   0.00% |
     20 |   0.00% |

    ```

    If *scaled* is ``#!python True``, horizontal bars are scaled to *width*.

    ``` python
    >>> h = (2@H(6)).ge(7)
    >>> print(f"{' 65 chars wide -->|':->65}")
    ---------------------------------------------- 65 chars wide -->|
    >>> print(h.format(scaled=False))
    avg |    0.58
    std |    0.49
    var |    0.24
      0 |  41.67% |####################
      1 |  58.33% |#############################
    >>> print(h.format(scaled=True))
    avg |    0.58
    std |    0.49
    var |    0.24
      0 |  41.67% |###################################
      1 |  58.33% |##################################################

    ```
    """
    width = as_int(width)

    # We convert various values herein to native ints and floats because number
    # tower implementations sometimes neglect to implement __format__ properly (or
    # at all). (I'm looking at you, sage.rings.…!)
    try:
        mu: RealLikeSCU = float(self.mean())
    except TypeError:
        mu = self.mean()

    if width <= 0:

        def _parts() -> Iterator[str]:
            yield f"avg: {mu:.2f}"

            for (
                outcome,
                probability,
            ) in self.distribution(fill_items):
                probability_f = float(probability)
                yield f"{outcome}:{probability_f:7.2%}"

        return "{" + ", ".join(_parts()) + "}"
    else:
        w = width - 15

        @beartype
        def lines() -> Iterator[str]:
            yield f"avg | {mu:7.2f}"

            try:
                std = float(self.stdev(mu))
                var = float(self.variance(mu))
                yield f"std | {std:7.2f}"
                yield f"var | {var:7.2f}"
            except TypeError:
                pass

            if self:
                outcomes, probabilities = self.distribution_xy(fill_items)
                tick_scale = max(probabilities) if scaled else 1.0

                for outcome, probability in zip(outcomes, probabilities):
                    try:
                        outcome_str = f"{outcome: 3}"
                    except (TypeError, ValueError):
                        outcome_str = str(outcome)
                        outcome_str = f"{outcome_str: >3}"

                    ticks = tick * int(w * probability / tick_scale)
                    probability_f = float(probability)
                    yield f"{outcome_str} | {probability_f:7.2%} |{ticks}"

        return sep.join(lines())

ge(self, other: _OperandT) -> H

Shorthand for self.map(operator.__ge__, other).umap(bool).

>>> H(6).ge(3)
H({False: 2, True: 4})

See the map and umap methods.

Source code in dyce/h.py
@beartype
def ge(
    self,
    other: _OperandT,
) -> H:
    r"""
    Shorthand for ``#!python self.map(operator.__ge__, other).umap(bool)``.

    ``` python
    >>> H(6).ge(3)
    H({False: 2, True: 4})

    ```

    See the [``map``][dyce.h.H.map] and [``umap``][dyce.h.H.umap] methods.
    """
    return self.map(__ge__, other).umap(bool)

gt(self, other: _OperandT) -> H

Shorthand for self.map(operator.__gt__, other).umap(bool).

>>> H(6).gt(3)
H({False: 3, True: 3})

See the map and umap methods.

Source code in dyce/h.py
@beartype
def gt(
    self,
    other: _OperandT,
) -> H:
    r"""
    Shorthand for ``#!python self.map(operator.__gt__, other).umap(bool)``.

    ``` python
    >>> H(6).gt(3)
    H({False: 3, True: 3})

    ```

    See the [``map``][dyce.h.H.map] and [``umap``][dyce.h.H.umap] methods.
    """
    return self.map(__gt__, other).umap(bool)

is_even(self) -> H

Equivalent to self.umap(dyce.types.is_even).

>>> H((-4, -2, 0, 1, 2, 3)).is_even()
H({False: 2, True: 4})

See the umap method.

Source code in dyce/h.py
@beartype
def is_even(self) -> H:
    r"""
    Equivalent to ``#!python self.umap(dyce.types.is_even)``.

    ``` python
    >>> H((-4, -2, 0, 1, 2, 3)).is_even()
    H({False: 2, True: 4})

    ```

    See the [``umap`` method][dyce.h.H.umap].
    """
    return self.umap(is_even)

is_odd(self) -> H

Equivalent to self.umap(dyce.types.is_odd).

>>> H((-4, -2, 0, 1, 2, 3)).is_odd()
H({False: 4, True: 2})

See the umap method.

Source code in dyce/h.py
@beartype
def is_odd(self) -> H:
    r"""
    Equivalent to ``#!python self.umap(dyce.types.is_odd)``.

    ``` python
    >>> H((-4, -2, 0, 1, 2, 3)).is_odd()
    H({False: 4, True: 2})

    ```

    See the [``umap`` method][dyce.h.H.umap].
    """
    return self.umap(is_odd)

items(self) -> ItemsView[RealLikeSCU, int]

D.items() -> a set-like object providing a view on D's items
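
For example:

>>> list(H((2, 3, 3, 4, 4, 5)).items())
[(2, 1), (3, 2), (4, 2), (5, 1)]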

Source code in dyce/h.py
@beartype
def items(self) -> ItemsView[RealLikeSCU, int]:
    return self._h.items()

keys(self) -> KeysView[RealLikeSCU]

D.keys() -> a set-like object providing a view on D's keys

Source code in dyce/h.py
@beartype
def keys(self) -> KeysView[RealLikeSCU]:
    return self.outcomes()

le(self, other: _OperandT) -> H

Shorthand for self.map(operator.__le__, other).umap(bool).

>>> H(6).le(3)
H({False: 3, True: 3})

See the map and umap methods.

Source code in dyce/h.py
@beartype
def le(
    self,
    other: _OperandT,
) -> H:
    r"""
    Shorthand for ``#!python self.map(operator.__le__, other).umap(bool)``.

    ``` python
    >>> H(6).le(3)
    H({False: 3, True: 3})

    ```

    See the [``map``][dyce.h.H.map] and [``umap``][dyce.h.H.umap] methods.
    """
    return self.map(__le__, other).umap(bool)

lowest_terms(self) -> H

Computes and returns a histogram whose counts share a greatest common divisor of 1.

>>> df = H((-1, -1, 0, 0, 1, 1)) ; df
H({-1: 2, 0: 2, 1: 2})
>>> df.lowest_terms()
H({-1: 1, 0: 1, 1: 1})
>>> d6avg = H((2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 5, 5)) ; d6avg
H({2: 2, 3: 4, 4: 4, 5: 2})
>>> d6avg.lowest_terms()
H({2: 1, 3: 2, 4: 2, 5: 1})
Source code in dyce/h.py
@beartype
def lowest_terms(self) -> H:
    r"""
    Computes and returns a histogram whose counts share a greatest common divisor of 1.

    ``` python
    >>> df = H((-1, -1, 0, 0, 1, 1)) ; df
    H({-1: 2, 0: 2, 1: 2})
    >>> df.lowest_terms()
    H({-1: 1, 0: 1, 1: 1})

    ```

    ``` python
    >>> d6avg = H((2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 5, 5)) ; d6avg
    H({2: 2, 3: 4, 4: 4, 5: 2})
    >>> d6avg.lowest_terms()
    H({2: 1, 3: 2, 4: 2, 5: 1})

    ```
    """
    return type(self)(self._lowest_terms())

lt(self, other: _OperandT) -> H

Shorthand for self.map(operator.__lt__, other).umap(bool).

>>> H(6).lt(3)
H({False: 4, True: 2})

See the map and umap methods.

Source code in dyce/h.py
@beartype
def lt(
    self,
    other: _OperandT,
) -> H:
    r"""
    Shorthand for ``#!python self.map(operator.__lt__, other).umap(bool)``.

    ``` python
    >>> H(6).lt(3)
    H({False: 4, True: 2})

    ```

    See the [``map``][dyce.h.H.map] and [``umap``][dyce.h.H.umap] methods.
    """
    return self.map(__lt__, other).umap(bool)

map(self, bin_op: _BinaryOperatorT, right_operand: _OperandT) -> H

Applies bin_op to each outcome of the histogram as the left operand and right_operand as the right. Shorthands exist for many arithmetic operators and comparators.

>>> import operator
>>> d6 = H(6)
>>> d6.map(operator.__add__, d6)
H({2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 5, 9: 4, 10: 3, 11: 2, 12: 1})
>>> d6.map(operator.__add__, d6) == d6 + d6
True
>>> d6.map(operator.__pow__, 2)
H({1: 1, 4: 1, 9: 1, 16: 1, 25: 1, 36: 1})
>>> d6.map(operator.__pow__, 2) == d6 ** 2
True
>>> d6.map(operator.__gt__, 3)
H({False: 3, True: 3})
>>> d6.map(operator.__gt__, 3) == d6.gt(3)
True
Source code in dyce/h.py
@beartype
def map(
    self,
    bin_op: _BinaryOperatorT,
    right_operand: _OperandT,
) -> H:
    r"""
    Applies *bin_op* to each outcome of the histogram as the left operand and
    *right_operand* as the right. Shorthands exist for many arithmetic operators and
    comparators.

    ``` python
    >>> import operator
    >>> d6 = H(6)
    >>> d6.map(operator.__add__, d6)
    H({2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 5, 9: 4, 10: 3, 11: 2, 12: 1})
    >>> d6.map(operator.__add__, d6) == d6 + d6
    True

    ```

    ``` python
    >>> d6.map(operator.__pow__, 2)
    H({1: 1, 4: 1, 9: 1, 16: 1, 25: 1, 36: 1})
    >>> d6.map(operator.__pow__, 2) == d6 ** 2
    True

    ```

    ``` python
    >>> d6.map(operator.__gt__, 3)
    H({False: 3, True: 3})
    >>> d6.map(operator.__gt__, 3) == d6.gt(3)
    True

    ```
    """
    if isinstance(right_operand, HableT):
        right_operand = right_operand.h()

    if isinstance(right_operand, H):
        return type(self)(
            (bin_op(s, o), self[s] * right_operand[o])
            for s, o in product(self, right_operand)
        )
    else:
        return type(self)(
            (bin_op(outcome, right_operand), count)
            for outcome, count in self.items()
        )

mean(self) -> RealLikeSCU

Returns the mean of the weighted outcomes (or 0.0 if there are no outcomes).
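
For example, the mean of 1d6 is 3.5:

>>> H(6).mean()
3.5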

Source code in dyce/h.py
@beartype
def mean(self) -> RealLikeSCU:
    r"""
    Returns the mean of the weighted outcomes (or 0.0 if there are no outcomes).
    """
    numerator: float
    denominator: float
    numerator = denominator = 0

    for outcome, count in self.items():
        numerator += outcome * count
        denominator += count

    return numerator / (denominator or 1)

ne(self, other: _OperandT) -> H

Shorthand for self.map(operator.__ne__, other).umap(bool).

>>> H(6).ne(3)
H({False: 1, True: 5})

See the map and umap methods.

Source code in dyce/h.py
@beartype
def ne(
    self,
    other: _OperandT,
) -> H:
    r"""
    Shorthand for ``#!python self.map(operator.__ne__, other).umap(bool)``.

    ``` python
    >>> H(6).ne(3)
    H({False: 1, True: 5})

    ```

    See the [``map``][dyce.h.H.map] and [``umap``][dyce.h.H.umap] methods.
    """
    return self.map(__ne__, other).umap(bool)

order_stat_for_n_at_pos(self, n: SupportsIntSCU, pos: SupportsIntSCU) -> H

Experimental

This method should be considered experimental and may change or disappear in future versions.

Shorthand for self.order_stat_func_for_n(n)(pos).
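
Because this is merely a shorthand, its results match calling the function returned by order_stat_func_for_n directly:

>>> d6avg = H((2, 3, 3, 4, 4, 5))
>>> d6avg.order_stat_for_n_at_pos(5, 3) == d6avg.order_stat_func_for_n(5)(3)
True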

Source code in dyce/h.py
@experimental
@beartype
def order_stat_for_n_at_pos(self, n: SupportsIntSCU, pos: SupportsIntSCU) -> H:
    r"""
    !!! warning "Experimental"

        This method should be considered experimental and may change or disappear in
        future versions.

    Shorthand for ``#!python self.order_stat_func_for_n(n)(pos)``.
    """
    return self.order_stat_func_for_n(n)(pos)

order_stat_func_for_n(self, n: SupportsIntSCU) -> Callable[[SupportsIntSCU], 'H']

Experimental

This method should be considered experimental and may change or disappear in future versions.

Returns a function that takes a single argument (pos) and computes the probability distribution for each outcome appearing in that position among n@self.

>>> d6avg = H((2, 3, 3, 4, 4, 5))
>>> order_stat_for_5d6avg = d6avg.order_stat_func_for_n(5)
>>> order_stat_for_5d6avg(3)  # counts where outcome appears at index 3
H({2: 26, 3: 1432, 4: 4792, 5: 1526})

The results show that, when rolling five six-sided “averaging” dice and sorting each roll, there are 26 ways where 2 appears at the fourth (index 3) position, 1432 ways where 3 appears at the fourth position, etc. This can be verified independently using the computationally expensive method of enumerating rolls and counting those that meet the criteria.

>>> from dyce import P
>>> p_5d6avg = 5@P(d6avg)
>>> sum(count for roll, count in p_5d6avg.rolls_with_counts() if roll[3] == 5)
1526

This method exists in addition to the H.order_stat_for_n_at_pos method because the betas for each outcome need only be computed once for a given n, not once per pos. Where different pos values are needed for the same n (e.g., in a loop) and where n is large, that overhead can be significant. The returned function caches those betas for n so that repeated queries for results at different pos values can be computed much faster.

%timeit [H(6).order_stat_for_n_at_pos(30, i) for i in range(10)]
462 ms ± 16.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

%timeit order_stat_for_30d6_at_pos = H(6).order_stat_func_for_n(30) ; [order_stat_for_30d6_at_pos(i) for i in range(10)]
54.4 ms ± 1.47 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Source: perf_order_stat_for_n.ipy
from dyce import H

print(f"%timeit [H(6).order_stat_for_n_at_pos(30, i) for i in range(10)]")
%timeit [H(6).order_stat_for_n_at_pos(30, i) for i in range(10)]
print()

print(f"%timeit order_stat_for_30d6_at_pos = H(6).order_stat_func_for_n(30) ; [order_stat_for_30d6_at_pos(i) for i in range(10)]")
%timeit order_stat_for_30d6_at_pos = H(6).order_stat_func_for_n(30) ; [order_stat_for_30d6_at_pos(i) for i in range(10)]
print()
Source code in dyce/h.py
@experimental
@beartype
def order_stat_func_for_n(
    self, n: SupportsIntSCU
) -> Callable[[SupportsIntSCU], "H"]:
    r"""
    !!! warning "Experimental"

        This method should be considered experimental and may change or disappear in
        future versions.

    Returns a function that takes a single argument (*pos*) and computes the
    probability distribution for each outcome appearing in that position among
    ``#!python n@self``.

    ``` python
    >>> d6avg = H((2, 3, 3, 4, 4, 5))
    >>> order_stat_for_5d6avg = d6avg.order_stat_func_for_n(5)
    >>> order_stat_for_5d6avg(3)  # counts where outcome appears at index 3
    H({2: 26, 3: 1432, 4: 4792, 5: 1526})

    ```

    The results show that, when rolling five six-sided “averaging” dice and sorting
    each roll, there are 26 ways where ``#!python 2`` appears at the fourth (index
    ``#!python 3``) position, 1432 ways where ``#!python 3`` appears at the fourth
    position, etc. This can be verified independently using the computationally
    expensive method of enumerating rolls and counting those that meet the criteria.

    ``` python
    >>> from dyce import P
    >>> p_5d6avg = 5@P(d6avg)
    >>> sum(count for roll, count in p_5d6avg.rolls_with_counts() if roll[3] == 5)
    1526

    ```

    This method exists in addition to the
    [``H.order_stat_for_n_at_pos`` method][dyce.h.H.order_stat_for_n_at_pos] because
    computing the betas for each outcome in *n* is unnecessary for each *pos*. Where
    different *pos* values are needed for the same *n* (e.g., in a loop) and where
    *n* is large, that overhead can be significant. The returned function caches
    those betas for *n* such that repeated querying or results at *pos* can be
    computed much faster.

    ``` python
    --8<-- "docs/assets/perf_order_stat_for_n.txt"
    ```

    <details>
    <summary>Source: <a href="https://github.com/posita/dyce/blob/latest/docs/assets/perf_order_stat_for_n.ipy"><code>perf_order_stat_for_n.ipy</code></a></summary>

    ``` python
    --8<-- "docs/assets/perf_order_stat_for_n.ipy"
    ```
    </details>
    """
    betas_by_outcome: Dict[RealLikeSCU, Tuple[H, H]] = {}

    for outcome in self.outcomes():
        betas_by_outcome[outcome] = (
            n @ self.le(outcome),
            n @ self.lt(outcome),
        )

    def _gen_h_items_at_pos(pos: int) -> Iterator[Tuple[RealLikeSCU, int]]:
        for outcome, (h_le, h_lt) in betas_by_outcome.items():
            yield (
                outcome,
                h_le.gt(pos).get(True, 0) - h_lt.gt(pos).get(True, 0),
            )

    @beartype
    def order_stat_for_n_at_pos(pos: SupportsIntSCU) -> H:
        return type(self)(_gen_h_items_at_pos(as_int(pos)))

    return order_stat_for_n_at_pos

outcomes(self) -> KeysView[RealLikeSCU]

More descriptive synonym for the keys method.
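
For example:

>>> list(H((2, 3, 3, 4, 4, 5)).outcomes())
[2, 3, 4, 5]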

Source code in dyce/h.py
@beartype
def outcomes(self) -> KeysView[RealLikeSCU]:
    r"""
    More descriptive synonym for the [``keys`` method][dyce.h.H.keys].
    """
    return self._h.keys()

rmap(self, left_operand: RealLikeSCU, bin_op: _BinaryOperatorT) -> H

Analogous to the map method, but where the caller supplies left_operand.

>>> import operator
>>> d6 = H(6)
>>> d6.rmap(2, operator.__pow__)
H({2: 1, 4: 1, 8: 1, 16: 1, 32: 1, 64: 1})
>>> d6.rmap(2, operator.__pow__) == 2 ** d6
True

Note

The positions of left_operand and bin_op are different from the map method. This is intentional and serves as a reminder of operand ordering.

Source code in dyce/h.py
@beartype
def rmap(
    self,
    left_operand: RealLikeSCU,
    bin_op: _BinaryOperatorT,
) -> H:
    r"""
    Analogous to the [``map`` method][dyce.h.H.map], but where the caller supplies
    *left_operand*.

    ``` python
    >>> import operator
    >>> d6 = H(6)
    >>> d6.rmap(2, operator.__pow__)
    H({2: 1, 4: 1, 8: 1, 16: 1, 32: 1, 64: 1})
    >>> d6.rmap(2, operator.__pow__) == 2 ** d6
    True

    ```

    !!! note

        The positions of *left_operand* and *bin_op* are different from
        [``map`` method][dyce.h.H.map]. This is intentional and serves as a reminder
        of operand ordering.
    """
    return type(self)(
        (bin_op(left_operand, outcome), count) for outcome, count in self.items()
    )

roll(self) -> RealLikeSCU

Returns a (weighted) random outcome.
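
The specific value varies by invocation, but it is always drawn from the histogram’s own outcomes:

>>> H(20).roll() in H(20).outcomes()
True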

Source code in dyce/h.py
@beartype
def roll(self) -> RealLikeSCU:
    r"""
    Returns a (weighted) random outcome, sorted.
    """
    return (
        rng.RNG.choices(
            population=tuple(self.outcomes()),
            weights=tuple(self.counts()),
            k=1,
        )[0]
        if self
        else 0
    )

stdev(self, mu: Optional[RealLikeSCU] = None) -> RealLikeSCU

Shorthand for math.sqrt(self.variance(mu)).
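
For example, assuming the population variance computed by the variance method, the standard deviation of 1d6 is sqrt(35/12), or roughly 1.7078:

>>> round(H(6).stdev(), 4)
1.7078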

Source code in dyce/h.py
@beartype
def stdev(self, mu: Optional[RealLikeSCU] = None) -> RealLikeSCU:
    r"""
    Shorthand for ``#!python math.sqrt(self.variance(mu))``.
    """
    return sqrt(self.variance(mu))

substitute(self, expand: _ExpandT, coalesce: _CoalesceT = coalesce_replace, max_depth: SupportsIntSCU = 1) -> H

Calls expand on each outcome, recursively up to max_depth times. If expand returns a number, it replaces the outcome. If it returns an H object, coalesce is called on the outcome and the expanded histogram, and the returned histogram is folded into the result. The default behavior for coalesce is to replace the outcome with the expanded histogram. Returned histograms are always reduced to their lowest terms.

See coalesce_replace and the lowest_terms method.

This method can be used to model complex mechanics. The following models re-rolling a face of 1 on the first roll:

>>> def reroll_one(h: H, outcome):
...   return h if outcome == 1 else outcome

>>> H(6).substitute(reroll_one)
H({1: 1, 2: 7, 3: 7, 4: 7, 5: 7, 6: 7})

See the explode method for a common shorthand for “exploding” dice (i.e., where, if the greatest face comes up, the die is re-rolled and the result is added to a running sum).

In nearly all cases, when a histogram is substituted for an outcome, it takes on the substituted outcome’s “scale”. In other words, the sum of the counts of the replacement retains the same proportion as the replaced outcome in relation to other outcomes. This becomes clearer when there is no overlap between the original histogram and the substitution.

>>> orig = H({1: 1, 2: 2, 3: 3, 4: 4})
>>> sub = orig.substitute(lambda h, outcome: -h if outcome == 4 else outcome) ; sub
H({-4: 8, -3: 6, -2: 4, -1: 2, 1: 5, 2: 10, 3: 15})
>>> sum(count for outcome, count in orig.items() if outcome == 4) / orig.total
0.4
>>> sum(count for outcome, count in sub.items() if outcome < 0) / sub.total
0.4

An important exception

If coalesce returns the empty histogram (H({})), the corresponding outcome and its counts are omitted from the result without substitution or scaling. A silly example is modeling a d5 by indefinitely re-rolling a d6 until something other than a 6 comes up.

>>> H(6).substitute(lambda __, outcome: H({}) if outcome == 6 else outcome)
H({1: 1, 2: 1, 3: 1, 4: 1, 5: 1})

This technique is more useful when modeling re-rolling certain derived outcomes, like ties in a contest.

>>> d6_3, d8_2 = 3@H(6), 2@H(8)
>>> d6_3.vs(d8_2)
H({-1: 4553, 0: 1153, 1: 8118})
>>> d6_3.vs(d8_2).substitute(lambda __, outcome: H({}) if outcome == 0 else outcome)
H({-1: 4553, 1: 8118})

Because it delegates to a callback for refereeing substitution decisions, substitute is quite flexible and well suited to modeling (or at least approximating) logical progressions with dependent variables. Consider the following mechanic:

  1. Start with a total of zero.
  2. Roll a six-sided die. Add the face to the total. If the face was a six, go to step 3. Otherwise stop.
  3. Roll a four-sided die. Add the face to the total. If the face was a four, go to step 2. Otherwise stop.

What is the likelihood of an even final tally? This can be approximated by:

>>> d4, d6 = H(4), H(6)

>>> def reroll_greatest_on_d4_d6(h: H, outcome):
...   if outcome == max(h):
...     if h == d6: return d4
...     if h == d4: return d6
...   return outcome

>>> import operator
>>> h = d6.substitute(reroll_greatest_on_d4_d6, operator.__add__, max_depth=6)
>>> h_even = h.is_even()
>>> print(f"{h_even[1] / h_even.total:.3%}")
39.131%

Surprised? Because both six and four are even numbers, the only way we keep rolling is if the total is even. You might think this would lead to evens being more likely. However, we only care about the final tally and the rules direct us to re-roll certain evens (nudging us toward an odd number more often than not).

We can also use this method to model expected damage from a single attack in d20-like role playing games.

>>> bonus = 1
>>> dmg_dice = H(8)
>>> dmg = dmg_dice + bonus
>>> crit = dmg + dmg_dice
>>> target = 15 - bonus
>>> d20 = H(20)

>>> def dmg_from_attack_roll(h: H, outcome):
...   if outcome == 20:
...     return crit
...   elif outcome >= target:
...     return dmg
...   else:
...     return 0

>>> h = d20.substitute(dmg_from_attack_roll)
>>> print(h.format(scaled=True))
avg |    2.15
std |    3.40
var |   11.55
  0 |  65.00% |##################################################
  2 |   3.75% |##
  3 |   3.83% |##
  4 |   3.91% |###
  5 |   3.98% |###
  6 |   4.06% |###
  7 |   4.14% |###
  8 |   4.22% |###
  9 |   4.30% |###
 10 |   0.62% |
 11 |   0.55% |
 12 |   0.47% |
 13 |   0.39% |
 14 |   0.31% |
 15 |   0.23% |
 16 |   0.16% |
 17 |   0.08% |
Source code in dyce/h.py
@beartype
def substitute(
    self,
    expand: _ExpandT,
    coalesce: _CoalesceT = coalesce_replace,
    max_depth: SupportsIntSCU = 1,
) -> H:
    r"""
    Calls *expand* on each outcome, recursively up to *max_depth* times. If *expand*
    returns a number, it replaces the outcome. If it returns an
    [``H`` object][dyce.h.H], *coalesce* is called on the outcome and the expanded
    histogram, and the returned histogram is folded into result. The default
    behavior for *coalesce* is to replace the outcome with the expanded histogram.
    Returned histograms are always reduced to their lowest terms.

    See [``coalesce_replace``][dyce.h.coalesce_replace] and the
    [``lowest_terms`` method][dyce.h.H.lowest_terms].

    This method can be used to model complex mechanics. The following models
    re-rolling a face of 1 on the first roll:

    ``` python
    >>> def reroll_one(h: H, outcome):
    ...   return h if outcome == 1 else outcome

    >>> H(6).substitute(reroll_one)
    H({1: 1, 2: 7, 3: 7, 4: 7, 5: 7, 6: 7})

    ```

    See the [``explode`` method][dyce.h.H.explode] for a common shorthand for
    “exploding” dice (i.e., where, if the greatest face come up, the die is
    re-rolled and the result is added to a running sum).

    In nearly all cases, when a histogram is substituted for an outcome, it takes on
    the substituted outcome’s “scale”. In other words, the sum of the counts of the
    replacement retains the same proportion as the replaced outcome in relation to
    other outcomes. This becomes clearer when there is no overlap between the
    original histogram and the substitution.

    ``` python
    >>> orig = H({1: 1, 2: 2, 3: 3, 4: 4})
    >>> sub = orig.substitute(lambda h, outcome: -h if outcome == 4 else outcome) ; sub
    H({-4: 8, -3: 6, -2: 4, -1: 2, 1: 5, 2: 10, 3: 15})
    >>> sum(count for outcome, count in orig.items() if outcome == 4) / orig.total
    0.4
    >>> sum(count for outcome, count in sub.items() if outcome < 0) / sub.total
    0.4

    ```

    !!! note "An important exception"

        If *coalesce* returns the empty histogram (``H({})``), the corresponding
        outcome and its counts are omitted from the result without substitution or
        scaling. A silly example is modeling a d5 by indefinitely re-rolling a d6
        until something other than a 6 comes up.

        ``` python
        >>> H(6).substitute(lambda __, outcome: H({}) if outcome == 6 else outcome)
        H({1: 1, 2: 1, 3: 1, 4: 1, 5: 1})

        ```

        This technique is more useful when modeling re-rolling certain derived
        outcomes, like ties in a contest.

        ``` python
        >>> d6_3, d8_2 = 3@H(6), 2@H(8)
        >>> d6_3.vs(d8_2)
        H({-1: 4553, 0: 1153, 1: 8118})
        >>> d6_3.vs(d8_2).substitute(lambda __, outcome: H({}) if outcome == 0 else outcome)
        H({-1: 4553, 1: 8118})

        ```

    Because it delegates to a callback for refereeing substitution decisions,
    ``#!python substitute`` is quite flexible and well suited to modeling (or at
    least approximating) logical progressions with dependent variables. Consider the
    following mechanic:

      1. Start with a total of zero.
      2. Roll a six-sided die. Add the face to the total. If the face was a six, go
         to step 3. Otherwise stop.
      3. Roll a four-sided die. Add the face to the total. If the face was a four,
         go to step 2. Otherwise stop.

    What is the likelihood of an even final tally? This can be approximated by:

    ``` python
    >>> d4, d6 = H(4), H(6)

    >>> def reroll_greatest_on_d4_d6(h: H, outcome):
    ...   if outcome == max(h):
    ...     if h == d6: return d4
    ...     if h == d4: return d6
    ...   return outcome

    >>> import operator
    >>> h = d6.substitute(reroll_greatest_on_d4_d6, operator.__add__, max_depth=6)
    >>> h_even = h.is_even()
    >>> print(f"{h_even[1] / h_even.total:.3%}")
    39.131%

    ```

    Surprised? Because both six and four are even numbers, the only way we keep
    rolling is if the total is even. You might think this would lead to evens being
    *more* likely. However, we only care about the final tally and the rules direct
    us to re-roll certain evens (nudging us toward an odd number more often than
    not).
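
    As a sanity check, the odd tallies account for the remaining probability
    mass (using ``is_even``’s counterpart, ``is_odd``):

    ``` python
    >>> h_odd = h.is_odd()
    >>> print(f"{h_odd[1] / h_odd.total:.3%}")
    60.869%

    ```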

    We can also use this method to model expected damage from a single attack in
    d20-like role playing games.

    ``` python
    >>> bonus = 1
    >>> dmg_dice = H(8)
    >>> dmg = dmg_dice + bonus
    >>> crit = dmg + dmg_dice
    >>> target = 15 - bonus
    >>> d20 = H(20)

    >>> def dmg_from_attack_roll(h: H, outcome):
    ...   if outcome == 20:
    ...     return crit
    ...   elif outcome >= target:
    ...     return dmg
    ...   else:
    ...     return 0

    >>> h = d20.substitute(dmg_from_attack_roll)
    >>> print(h.format(scaled=True))
    avg |    2.15
    std |    3.40
    var |   11.55
      0 |  65.00% |##################################################
      2 |   3.75% |##
      3 |   3.83% |##
      4 |   3.91% |###
      5 |   3.98% |###
      6 |   4.06% |###
      7 |   4.14% |###
      8 |   4.22% |###
      9 |   4.30% |###
     10 |   0.62% |
     11 |   0.55% |
     12 |   0.47% |
     13 |   0.39% |
     14 |   0.31% |
     15 |   0.23% |
     16 |   0.16% |
     17 |   0.08% |

    ```
    """
    max_depth = as_int(max_depth)

    if max_depth < 0:
        raise ValueError("max_depth cannot be negative")

    def _substitute(h: H, depth: int = 0) -> H:
        assert coalesce is not None

        if depth == max_depth:
            return h

        total_scalar = 1
        items_for_reassembly: List[Tuple[RealLikeSCU, int, int]] = []

        for outcome, count in h.items():
            expanded = expand(h, outcome)

            if isinstance(expanded, H):
                # Keep expanding deeper, if we can
                expanded = _substitute(expanded, depth + 1)
                # Coalesce the result
                expanded = coalesce(expanded, outcome)
                # Account for the impact of expansion on peers
                expanded_scalar = expanded.total

                if expanded_scalar:
                    total_scalar *= expanded_scalar
                    # Account for the impact of the original count on the result, but
                    # keep track of the impact on peers so we can factor it out for
                    # these items later
                    items_for_reassembly.extend(
                        (expanded_outcome, expanded_count * count, expanded_scalar)
                        for expanded_outcome, expanded_count in expanded.items()
                    )
            else:
                items_for_reassembly.append((expanded, count, 1))

        return type(self)(
            (
                # Apply the total_scalar, but factor out this item's contribution
                (outcome, count * total_scalar // scalar)
                for outcome, count, scalar in items_for_reassembly
            )
        ).lowest_terms()

    return _substitute(self)

umap(self, un_op: _UnaryOperatorT) -> H

Applies un_op to each outcome of the histogram.

>>> import operator
>>> H(6).umap(operator.__neg__)
H(-6)
>>> H(4).umap(lambda outcome: (-outcome) ** outcome)
H({-27: 1, -1: 1, 4: 1, 256: 1})
Source code in dyce/h.py
@beartype
def umap(
    self,
    un_op: _UnaryOperatorT,
) -> H:
    r"""
    Applies *un_op* to each outcome of the histogram.

    ``` python
    >>> import operator
    >>> H(6).umap(operator.__neg__)
    H(-6)

    ```

    ``` python
    >>> H(4).umap(lambda outcome: (-outcome) ** outcome)
    H({-27: 1, -1: 1, 4: 1, 256: 1})

    ```
    """
    h = type(self)((un_op(outcome), count) for outcome, count in self.items())

    if self._simple_init is not None:
        simple_init = un_op(self._simple_init)

        if isinstance(simple_init, SupportsInt):
            h_simple = type(self)(simple_init)

            if h_simple == h:
                return h_simple

    return h

values(self) -> ValuesView[int]

D.values() -> an object providing a view on D's values
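
For example:

``` python
>>> list(H(6).values())
[1, 1, 1, 1, 1, 1]

```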

Source code in dyce/h.py
@beartype
def values(self) -> ValuesView[int]:
    return self.counts()

variance(self, mu: Optional[RealLikeSCU] = None) -> RealLikeSCU

Returns the variance of the weighted outcomes. If provided, mu is used as the mean (to avoid duplicate computation).
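
For example, the variance of 2@H(6):

``` python
>>> (2@H(6)).variance()
5.833333333333333

```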

Source code in dyce/h.py
@beartype
def variance(self, mu: Optional[RealLikeSCU] = None) -> RealLikeSCU:
    r"""
    Returns the variance of the weighted outcomes. If provided, *mu* is used as the mean
    (to avoid duplicate computation).
    """
    mu = mu if mu else self.mean()
    numerator: float
    denominator: float
    numerator = denominator = 0

    for outcome, count in self.items():
        numerator += (outcome - mu) ** 2 * count
        denominator += count

    return numerator / (denominator or 1)

vs(self, other: _OperandT) -> H

Compares the histogram with other. -1 represents where other is greater. 0 represents where they are equal. 1 represents where other is less.

Shorthand for self.within(0, 0, other).

>>> H(6).vs(H(4))
H({-1: 6, 0: 4, 1: 14})
>>> H(6).vs(H(4)) == H(6).within(0, 0, H(4))
True

See the within method.

Source code in dyce/h.py
@beartype
def vs(self, other: _OperandT) -> H:
    r"""
    Compares the histogram with *other*. -1 represents where *other* is greater. 0
    represents where they are equal. 1 represents where *other* is less.

    Shorthand for ``#!python self.within(0, 0, other)``.

    ``` python
    >>> H(6).vs(H(4))
    H({-1: 6, 0: 4, 1: 14})
    >>> H(6).vs(H(4)) == H(6).within(0, 0, H(4))
    True

    ```

    See the [``within`` method][dyce.h.H.within].
    """
    return self.within(0, 0, other)

within(self, lo: RealLikeSCU, hi: RealLikeSCU, other: _OperandT = 0) -> H

Computes the difference between the histogram and other. -1 represents where that difference is less than lo. 0 represents where that difference is between lo and hi (inclusive). 1 represents where that difference is greater than hi.

>>> d6_2 = 2@H(6)
>>> d6_2.within(7, 9)
H({-1: 15, 0: 15, 1: 6})
>>> print(d6_2.within(7, 9).format())
avg |   -0.25
std |    0.72
var |    0.52
 -1 |  41.67% |####################
  0 |  41.67% |####################
  1 |  16.67% |########
>>> d6_3, d8_2 = 3@H(6), 2@H(8)
>>> d6_3.within(-1, 1, d8_2)  # 3d6 w/in 1 of 2d8
H({-1: 3500, 0: 3412, 1: 6912})
>>> print(d6_3.within(-1, 1, d8_2).format())
avg |    0.25
std |    0.83
var |    0.69
 -1 |  25.32% |############
  0 |  24.68% |############
  1 |  50.00% |#########################
Source code in dyce/h.py
@beartype
def within(self, lo: RealLikeSCU, hi: RealLikeSCU, other: _OperandT = 0) -> H:
    r"""
    Computes the difference between the histogram and *other*. -1 represents where that
    difference is less than *lo*. 0 represents where that difference is between *lo*
    and *hi* (inclusive). 1 represents where that difference is greater than *hi*.

    ``` python
    >>> d6_2 = 2@H(6)
    >>> d6_2.within(7, 9)
    H({-1: 15, 0: 15, 1: 6})
    >>> print(d6_2.within(7, 9).format())
    avg |   -0.25
    std |    0.72
    var |    0.52
     -1 |  41.67% |####################
      0 |  41.67% |####################
      1 |  16.67% |########

    ```

    ``` python
    >>> d6_3, d8_2 = 3@H(6), 2@H(8)
    >>> d6_3.within(-1, 1, d8_2)  # 3d6 w/in 1 of 2d8
    H({-1: 3500, 0: 3412, 1: 6912})
    >>> print(d6_3.within(-1, 1, d8_2).format())
    avg |    0.25
    std |    0.83
    var |    0.69
     -1 |  25.32% |############
      0 |  24.68% |############
      1 |  50.00% |#########################

    ```
    """
    return self.map(_within(lo, hi), other)

P (Sequence, Generic, HableOpsMixin)

An immutable pool (ordered sequence) supporting group operations for zero or more H objects (provided or created from the initializer’s args parameter).

>>> from dyce import P
>>> p_d6 = P(6) ; p_d6  # shorthand for P(H(6))
P(6)
>>> P(p_d6, p_d6)  # 2d6
P(6, 6)
>>> 2@p_d6  # also 2d6
P(6, 6)
>>> 2@(2@p_d6) == 4@p_d6
True
>>> p = P(4, P(6, P(8, P(10, P(12, P(20)))))) ; p
P(4, 6, 8, 10, 12, 20)
>>> sum(p.roll()) in p.h()
True

This class implements the HableT protocol and derives from the HableOpsMixin class, which means it can be “flattened” into a single histogram, either explicitly via the h method, or implicitly by using arithmetic operations.

>>> -p_d6
H({-6: 1, -5: 1, -4: 1, -3: 1, -2: 1, -1: 1})
>>> p_d6 + p_d6
H({2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 5, 9: 4, 10: 3, 11: 2, 12: 1})
>>> 2 * P(8) - 1
H({1: 1, 3: 1, 5: 1, 7: 1, 9: 1, 11: 1, 13: 1, 15: 1})

To perform arithmetic on individual H objects in a pool without flattening, use the map, rmap, and umap methods.

>>> import operator
>>> P(4, 6, 8).umap(operator.__neg__)
P(-8, -6, -4)
>>> P(4, 6).map(operator.__pow__, 2)
P(H({1: 1, 4: 1, 9: 1, 16: 1}), H({1: 1, 4: 1, 9: 1, 16: 1, 25: 1, 36: 1}))
>>> P(4, 6).rmap(2, operator.__pow__)
P(H({2: 1, 4: 1, 8: 1, 16: 1}), H({2: 1, 4: 1, 8: 1, 16: 1, 32: 1, 64: 1}))

Comparisons with H objects work as expected.

>>> from dyce import H
>>> 3@p_d6 == H(6) + H(6) + H(6)
True

Indexing selects a contained histogram.

>>> P(4, 6, 8)[0]
H(4)

Note that pools are opinionated about ordering.

>>> P(8, 6, 4)
P(4, 6, 8)
>>> P(8, 6, 4)[0] == P(8, 4, 6)[0] == H(4)
True

In an extension to (departure from) the HableT protocol, the h method’s implementation also allows subsets of outcomes to be “taken” (selected) by passing in selection criteria. Values are indexed from least to greatest. Negative indexes are supported and retain their idiomatic meaning. Modeling the sum of the greatest two faces of three six-sided dice (3d6) can be expressed as:

>>> p_3d6 = 3@p_d6
>>> p_3d6.h(-2, -1)
H({2: 1, 3: 3, 4: 7, 5: 12, 6: 19, 7: 27, 8: 34, 9: 36, 10: 34, 11: 27, 12: 16})
>>> print(p_3d6.h(-2, -1).format())
avg |    8.46
std |    2.21
var |    4.91
  2 |   0.46% |
  3 |   1.39% |
  4 |   3.24% |#
  5 |   5.56% |##
  6 |   8.80% |####
  7 |  12.50% |######
  8 |  15.74% |#######
  9 |  16.67% |########
 10 |  15.74% |#######
 11 |  12.50% |######
 12 |   7.41% |###

__slots__: Union[str, Iterable[str]] special

is_homogeneous: bool property readonly

Experimental

This property should be considered experimental and may change or disappear in future versions.

A flag indicating whether the pool’s population of histograms is homogeneous.

>>> P(6, 6).is_homogeneous
True
>>> P(4, 6, 8).is_homogeneous
False

__eq__(self, other) -> bool special

Source code in dyce/p.py
@beartype
def __eq__(self, other) -> bool:
    if isinstance(other, P):
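        # the bare __eq__ here is presumably operator.__eq__ (imported in
        # dyce/p.py), comparing the pools' underlying tuples of histograms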
        return __eq__(self._hs, other._hs)
    else:
        return NotImplemented

__getitem__(self, key: _GetItemT) -> Union[H, 'P'] special

Source code in dyce/p.py
@beartype
# TODO(posita): See <https://github.com/python/mypy/issues/8393>
# TODO(posita): See <https://github.com/beartype/beartype/issues/39#issuecomment-871914114> et seq.
def __getitem__(self, key: _GetItemT) -> Union[H, "P"]:  # type: ignore [override]
    if isinstance(key, slice):
        return P(*self._hs[key])
    else:
        return self._hs[__index__(key)]

__init__(self, *args: Union[SupportsIntSCU, 'P', H]) -> None special

Initializer.

Source code in dyce/p.py
@beartype
def __init__(self, *args: Union[SupportsIntSCU, "P", H]) -> None:
    r"Initializer."
    super().__init__()

    def _gen_hs() -> Iterator[H]:
        for a in args:
            if isinstance(a, H):
                yield a
            elif isinstance(a, P):
                for h in a._hs:
                    yield h
            elif isinstance(a, SupportsInt):
                yield H(a)
            else:
                raise ValueError(f"unrecognized initializer {args}")

    hs = list(h for h in _gen_hs() if h)

    try:
        hs.sort(key=lambda h: tuple(h.items()))
    except TypeError:
        # This is for outcomes that don't support direct comparisons, like symbolic
        # representations
        hs.sort(key=lambda h: str(tuple(h.items())))

    self._hs = tuple(hs)

__iter__(self) -> Iterator[H] special

Source code in dyce/p.py
@beartype
def __iter__(self) -> Iterator[H]:
    return iter(self._hs)

__len__(self) -> int special

Source code in dyce/p.py
@beartype
def __len__(self) -> int:
    return len(self._hs)

__matmul__(self, other: SupportsIntSCU) -> P special

Source code in dyce/p.py
@beartype
def __matmul__(self, other: SupportsIntSCU) -> P:
    try:
        other = as_int(other)
    except TypeError:
        return NotImplemented

    if other < 0:
        raise ValueError("argument cannot be negative")
    else:
        return P(*chain.from_iterable(repeat(self, other)))

__ne__(self, other) -> bool special

Source code in dyce/p.py
@beartype
def __ne__(self, other) -> bool:
    if isinstance(other, P):
        return __ne__(self._hs, other._hs)
    else:
        return NotImplemented

__repr__(self) -> str special

Source code in dyce/p.py
@beartype
def __repr__(self) -> str:
    def _parts() -> Iterator[str]:
        for h in self:
            yield (str(h._simple_init) if h._simple_init is not None else repr(h))

    args = ", ".join(_parts())

    return f"{type(self).__name__}({args})"

__rmatmul__(self, other: SupportsIntSCU) -> P special

Source code in dyce/p.py
@beartype
def __rmatmul__(self, other: SupportsIntSCU) -> P:
    return self.__matmul__(other)

appearances_in_rolls(self, outcome: RealLikeSCU) -> H

Experimental

This method should be considered experimental and may change or disappear in future versions. While it does provide a performance improvement over other techniques, it is not significant for most applications, and rarely justifies the corresponding reduction in readability.

Returns a histogram where the outcomes (keys) are the number of times outcome appears, and the counts are the number of rolls where outcome appears precisely that number of times. Equivalent to H((sum(1 for v in roll if v == outcome), count) for roll, count in self.rolls_with_counts()), but much more efficient.

>>> p_2d6 = P(6, 6)
>>> list(p_2d6.rolls_with_counts())
[((1, 1), 1), ((1, 2), 2), ((1, 3), 2), ((1, 4), 2), ((1, 5), 2), ((1, 6), 2), ...]
>>> p_2d6.appearances_in_rolls(1)
H({0: 25, 1: 10, 2: 1})
>>> # Least efficient, by far
>>> d4, d6 = H(4), H(6)
>>> p_3d4_2d6 = P(d4, d4, d4, d6, d6)
>>> H((sum(1 for v in roll if v == 3), count) for roll, count in p_3d4_2d6.rolls_with_counts())
H({0: 675, 1: 945, 2: 522, 3: 142, 4: 19, 5: 1})
>>> # Pretty darned efficient, generalizable to other boolean inquiries, and
>>> # arguably the most readable
>>> d4_eq3, d6_eq3 = d4.eq(3), d6.eq(3)
>>> 3@d4_eq3 + 2@d6_eq3
H({0: 675, 1: 945, 2: 522, 3: 142, 4: 19, 5: 1})
>>> # Most efficient for large sets of dice
>>> p_3d4_2d6.appearances_in_rolls(3)
H({0: 675, 1: 945, 2: 522, 3: 142, 4: 19, 5: 1})

Based on some rudimentary testing, this method appears to converge on being about twice as fast as the boolean accumulation technique for larger sets.

%timeit 3@d4_eq3 + 2@d6_eq3
397 µs ± 17.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

%timeit P(3@P(4), 2@P(6)).appearances_in_rolls(3)
653 µs ± 23.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

%timeit 9@d4_eq3 + 6@d6_eq3
1.39 ms ± 53.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

%timeit P(9@P(4), 6@P(6)).appearances_in_rolls(3)
1.12 ms ± 35.3 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

%timeit 27@d4_eq3 + 18@d6_eq3
7.14 ms ± 223 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

%timeit P(27@P(4), 18@P(6)).appearances_in_rolls(3)
3.32 ms ± 109 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

%timeit 81@d4_eq3 + 54@d6_eq3
46.7 ms ± 1.01 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

%timeit P(81@P(4), 54@P(6)).appearances_in_rolls(3)
17.1 ms ± 416 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Source: perf_appearances_in_rolls.ipy
from dyce import H, P

p_2d6 = P(6, 6)
d4, d6 = H(4), H(6)
p_3d4_2d6 = P(d4, d4, d4, d6, d6)
d4_eq3, d6_eq3 = d4.eq(3), d6.eq(3)

print(f"%timeit 3@d4_eq3 + 2@d6_eq3")
%timeit 3@d4_eq3 + 2@d6_eq3
print()

print(f"%timeit P(3@P(4), 2@P(6)).appearances_in_rolls(3)")
%timeit P(3@P(4), 2@P(6)).appearances_in_rolls(3)
print()

print(f"%timeit 9@d4_eq3 + 6@d6_eq3")
%timeit 9@d4_eq3 + 6@d6_eq3
print()

print(f"%timeit P(9@P(4), 6@P(6)).appearances_in_rolls(3)")
%timeit P(9@P(4), 6@P(6)).appearances_in_rolls(3)
print()

print(f"%timeit 27@d4_eq3 + 18@d6_eq3")
%timeit 27@d4_eq3 + 18@d6_eq3
print()

print(f"%timeit P(27@P(4), 18@P(6)).appearances_in_rolls(3)")
%timeit P(27@P(4), 18@P(6)).appearances_in_rolls(3)
print()

print(f"%timeit 81@d4_eq3 + 54@d6_eq3")
%timeit 81@d4_eq3 + 54@d6_eq3
print()

print(f"%timeit P(81@P(4), 54@P(6)).appearances_in_rolls(3)")
%timeit P(81@P(4), 54@P(6)).appearances_in_rolls(3)
print()
Source code in dyce/p.py
@experimental
@beartype
def appearances_in_rolls(self, outcome: RealLikeSCU) -> H:
    r"""
    !!! warning "Experimental"

        This method should be considered experimental and may change or disappear in
        future versions. While it does provide a performance improvement over other
        techniques, it is not significant for most applications, and rarely
        justifies the corresponding reduction in readability.

    Returns a histogram where the outcomes (keys) are the number of times *outcome*
    appears, and the counts are the number of rolls where *outcome* appears
    precisely that number of times. Equivalent to ``#!python H((sum(1 for v in roll
    if v == outcome), count) for roll, count in self.rolls_with_counts())``, but
    much more efficient.

    ``` python
    >>> p_2d6 = P(6, 6)
    >>> list(p_2d6.rolls_with_counts())
    [((1, 1), 1), ((1, 2), 2), ((1, 3), 2), ((1, 4), 2), ((1, 5), 2), ((1, 6), 2), ...]
    >>> p_2d6.appearances_in_rolls(1)
    H({0: 25, 1: 10, 2: 1})

    ```

    ``` python
    >>> # Least efficient, by far
    >>> d4, d6 = H(4), H(6)
    >>> p_3d4_2d6 = P(d4, d4, d4, d6, d6)
    >>> H((sum(1 for v in roll if v == 3), count) for roll, count in p_3d4_2d6.rolls_with_counts())
    H({0: 675, 1: 945, 2: 522, 3: 142, 4: 19, 5: 1})

    ```

    ``` python
    >>> # Pretty darned efficient, generalizable to other boolean inquiries, and
    >>> # arguably the most readable
    >>> d4_eq3, d6_eq3 = d4.eq(3), d6.eq(3)
    >>> 3@d4_eq3 + 2@d6_eq3
    H({0: 675, 1: 945, 2: 522, 3: 142, 4: 19, 5: 1})

    ```

    ``` python
    >>> # Most efficient for large sets of dice
    >>> p_3d4_2d6.appearances_in_rolls(3)
    H({0: 675, 1: 945, 2: 522, 3: 142, 4: 19, 5: 1})

    ```

    Based on some rudimentary testing, this method appears to converge on being
    about twice as fast as the boolean accumulation technique for larger sets.

    ``` python
    --8<-- "docs/assets/perf_appearances_in_rolls.txt"
    ```

    <details>
    <summary>Source: <a href="https://github.com/posita/dyce/blob/latest/docs/assets/perf_appearances_in_rolls.ipy"><code>perf_appearances_in_rolls.ipy</code></a></summary>

    ``` python
    --8<-- "docs/assets/perf_appearances_in_rolls.ipy"
    ```
    </details>
    """
    group_counters: List[Counter[RealLikeSCU]] = []

    for h, hs in groupby(self):
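        # for a group of n identical histograms, group_counter[k] tallies the
        # number of that group's rolls in which *outcome* appears exactly k
        # times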
        group_counter: Counter[RealLikeSCU] = counter()
        n = sum(1 for _ in hs)

        for k in range(0, n + 1):
            group_counter[k] = h.exactly_k_times_in_n(outcome, n, k) * (
                group_counter[k] if group_counter[k] else 1
            )

        group_counters.append(group_counter)

    return sum_h(H(group_counter) for group_counter in group_counters)

h(self, *which: _GetItemT) -> H

Roughly equivalent to H((sum(roll), count) for roll, count in self.rolls_with_counts(*which)) with some short-circuit optimizations.

When provided no arguments, h combines (or “flattens”) contained histograms in accordance with the HableT protocol.

>>> (2@P(6)).h()
H({2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 5, 9: 4, 10: 3, 11: 2, 12: 1})

If one or more arguments are provided, this method sums subsets of outcomes those arguments identify for each roll. Outcomes are ordered from least (index 0) to greatest (index -1 or len(self) - 1). Identifiers can be ints or slices, and can be mixed (a mixed example appears below).

Taking the greatest of two six-sided dice can be modeled as:

>>> p_2d6 = 2@P(6)
>>> p_2d6.h(-1)
H({1: 1, 2: 3, 3: 5, 4: 7, 5: 9, 6: 11})
>>> print(p_2d6.h(-1).format())
avg |    4.47
std |    1.40
var |    1.97
  1 |   2.78% |#
  2 |   8.33% |####
  3 |  13.89% |######
  4 |  19.44% |#########
  5 |  25.00% |############
  6 |  30.56% |###############

Taking the greatest two and least two faces of ten four-sided dice (10d4) can be modeled as:

>>> p_10d4 = 10@P(4)
>>> p_10d4.h(slice(2), slice(-2, None))
H({4: 1, 5: 10, 6: 1012, 7: 5030, 8: 51973, 9: 168760, 10: 595004, 11: 168760, 12: 51973, 13: 5030, 14: 1012, 15: 10, 16: 1})
>>> print(p_10d4.h(slice(2), slice(-2, None)).format(scaled=True))
avg |   10.00
std |    0.91
var |    0.84
  4 |   0.00% |
  5 |   0.00% |
  6 |   0.10% |
  7 |   0.48% |
  8 |   4.96% |####
  9 |  16.09% |##############
 10 |  56.74% |##################################################
 11 |  16.09% |##############
 12 |   4.96% |####
 13 |   0.48% |
 14 |   0.10% |
 15 |   0.00% |
 16 |   0.00% |

Taking all outcomes exactly once is equivalent to summing the histograms in the pool.

>>> d6 = H(6)
>>> d6avg = H((2, 3, 3, 4, 4, 5))
>>> p = 2@P(d6, d6avg)
>>> p.h(slice(None)) == p.h() == d6 + d6 + d6avg + d6avg
True
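
As promised above, identifiers can also be mixed. A small sketch (not taken from the library’s own examples): selecting index 0 alongside slice(-2, None) covers all three dice of 3d6 exactly once, so it is equivalent to taking everything.

``` python
>>> p_3d6 = 3@P(6)
>>> p_3d6.h(0, slice(-2, None)) == p_3d6.h()
True

```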
Source code in dyce/p.py
@beartype
def h(self, *which: _GetItemT) -> H:
    r"""
    Roughly equivalent to ``#!python H((sum(roll), count) for roll, count in
    self.rolls_with_counts(*which))`` with some short-circuit optimizations.

    When provided no arguments, ``#!python h`` combines (or “flattens”) contained
    histograms in accordance with the [``HableT`` protocol][dyce.h.HableT].

    ``` python
    >>> (2@P(6)).h()
    H({2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 7: 6, 8: 5, 9: 4, 10: 3, 11: 2, 12: 1})

    ```

    If one or more arguments are provided, this method sums subsets of outcomes
    those arguments identify for each roll. Outcomes are ordered from least (index
    ``#!python 0``) to greatest (index ``#!python -1`` or ``#!python len(self) -
    1``). Identifiers can be ``#!python int``s or ``#!python slice``s, and can be
    mixed.

    Taking the greatest of two six-sided dice can be modeled as:

    ``` python
    >>> p_2d6 = 2@P(6)
    >>> p_2d6.h(-1)
    H({1: 1, 2: 3, 3: 5, 4: 7, 5: 9, 6: 11})
    >>> print(p_2d6.h(-1).format())
    avg |    4.47
    std |    1.40
    var |    1.97
      1 |   2.78% |#
      2 |   8.33% |####
      3 |  13.89% |######
      4 |  19.44% |#########
      5 |  25.00% |############
      6 |  30.56% |###############

    ```

    Taking the greatest two and least two faces of ten four-sided dice (``10d4``)
    can be modeled as:

    ``` python
    >>> p_10d4 = 10@P(4)
    >>> p_10d4.h(slice(2), slice(-2, None))
    H({4: 1, 5: 10, 6: 1012, 7: 5030, 8: 51973, 9: 168760, 10: 595004, 11: 168760, 12: 51973, 13: 5030, 14: 1012, 15: 10, 16: 1})
    >>> print(p_10d4.h(slice(2), slice(-2, None)).format(scaled=True))
    avg |   10.00
    std |    0.91
    var |    0.84
      4 |   0.00% |
      5 |   0.00% |
      6 |   0.10% |
      7 |   0.48% |
      8 |   4.96% |####
      9 |  16.09% |##############
     10 |  56.74% |##################################################
     11 |  16.09% |##############
     12 |   4.96% |####
     13 |   0.48% |
     14 |   0.10% |
     15 |   0.00% |
     16 |   0.00% |

    ```

    Taking all outcomes exactly once is equivalent to summing the histograms in the
    pool.

    ``` python
    >>> d6 = H(6)
    >>> d6avg = H((2, 3, 3, 4, 4, 5))
    >>> p = 2@P(d6, d6avg)
    >>> p.h(slice(None)) == p.h() == d6 + d6 + d6avg + d6avg
    True

    ```
    """
    if which:
        n = len(self)
        i = _analyze_selection(n, which)

        if i and i >= n:
            # The caller selected all dice in the pool exactly i // n times, so we
            # can short-circuit roll enumeration
            assert i % n == 0

            return self.h() * (i // n)
        else:
            return H(
                (sum(roll), count) for roll, count in self.rolls_with_counts(*which)
            )
    else:
        # The caller offered no selection
        return sum_h(self)

map(self, op: _BinaryOperatorT, right_operand: _OperandT) -> P

Shorthand for P(*(h.map(op, right_operand) for h in self)). See the H.map method.

>>> import operator
>>> p_3d6 = 3@P(H((-3, -1, 2, 4)))
>>> p_3d6.map(operator.__mul__, -1)
P(H({-4: 1, -2: 1, 1: 1, 3: 1}), H({-4: 1, -2: 1, 1: 1, 3: 1}), H({-4: 1, -2: 1, 1: 1, 3: 1}))
Source code in dyce/p.py
@beartype
def map(
    self,
    op: _BinaryOperatorT,
    right_operand: _OperandT,
) -> P:
    r"""
    Shorthand for ``#!python P(*(h.map(op, right_operand) for h in self))``. See the
    [``H.map`` method][dyce.h.H.map].

    ``` python
    >>> import operator
    >>> p_3d6 = 3@P(H((-3, -1, 2, 4)))
    >>> p_3d6.map(operator.__mul__, -1)
    P(H({-4: 1, -2: 1, 1: 1, 3: 1}), H({-4: 1, -2: 1, 1: 1, 3: 1}), H({-4: 1, -2: 1, 1: 1, 3: 1}))

    ```
    """
    return P(*(h.map(op, right_operand) for h in self))

rmap(self, left_operand: RealLikeSCU, op: _BinaryOperatorT) -> P

Shorthand for P(*(h.rmap(left_operand, op) for h in self)). See the H.rmap method.

>>> import operator
>>> from fractions import Fraction
>>> p_2d6 = 2@P(H((-3, -1, 2, 4)))
>>> p_2d6.umap(Fraction).rmap(1, operator.__truediv__)
P(H({Fraction(-1, 1): 1, Fraction(-1, 3): 1, Fraction(1, 4): 1, Fraction(1, 2): 1}), H({Fraction(-1, 1): 1, Fraction(-1, 3): 1, Fraction(1, 4): 1, Fraction(1, 2): 1}))
Source code in dyce/p.py
@beartype
def rmap(
    self,
    left_operand: RealLikeSCU,
    op: _BinaryOperatorT,
) -> P:
    r"""
    Shorthand for ``#!python P(*(h.rmap(left_operand, op) for h in self))``. See the
    [``H.rmap`` method][dyce.h.H.rmap].

    ``` python
    >>> import operator
    >>> from fractions import Fraction
    >>> p_2d6 = 2@P(H((-3, -1, 2, 4)))
    >>> p_2d6.umap(Fraction).rmap(1, operator.__truediv__)
    P(H({Fraction(-1, 1): 1, Fraction(-1, 3): 1, Fraction(1, 4): 1, Fraction(1, 2): 1}), H({Fraction(-1, 1): 1, Fraction(-1, 3): 1, Fraction(1, 4): 1, Fraction(1, 2): 1}))

    ```
    """
    return P(*(h.rmap(left_operand, op) for h in self))

roll(self) -> _RollT

Returns (weighted) random outcomes from contained histograms.

On ordering

This method “works” (i.e., falls back to a “natural” ordering of string representations) for outcomes whose relative values cannot be known (e.g., symbolic expressions). This is deliberate to allow random roll functionality where symbolic resolution is not needed or will happen later.
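
Because results are random, a quick sketch can only check invariants:

``` python
>>> outcomes = P(4, 6, 8).roll()
>>> len(outcomes)
3
>>> all(1 <= outcome <= 8 for outcome in outcomes)
True

```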

Source code in dyce/p.py
@beartype
def roll(self) -> _RollT:
    r"""
    Returns (weighted) random outcomes from contained histograms.

    !!! note "On ordering"

        This method “works” (i.e., falls back to a “natural” ordering of string
        representations) for outcomes whose relative values cannot be known (e.g.,
        symbolic expressions). This is deliberate to allow random roll functionality
        where symbolic resolution is not needed or will happen later.
    """
    return tuple(sorted_outcomes(h.roll() for h in self))

rolls_with_counts(self, *which: _GetItemT) -> Iterator[_RollCountT]

Returns an iterator yielding 2-tuples (pairs) that, collectively, enumerate all possible outcomes for the pool. The first item in the 2-tuple is a sorted sequence of the outcomes for a distinct roll. The second is the count for that roll. Outcomes in each roll are ordered least to greatest.

If one or more arguments are provided, this method selects subsets of outcomes for each roll. Outcomes in each roll are ordered from least (index 0) to greatest (index -1 or len(self) - 1). Identifiers can be ints or slices, and can be mixed for more flexible selections.

>>> from collections import Counter
>>> def accumulate_roll_counts(counter, roll_counts):
...   for roll, count in roll_counts:
...     counter[roll] += count
...   return counter
>>> p_6d6 = 6@P(6)
>>> every_other_d6 = accumulate_roll_counts(Counter(), p_6d6.rolls_with_counts(slice(None, None, -2))) ; every_other_d6
Counter({(6, 4, 2): 4110, (6, 5, 3): 3390, (6, 4, 3): 3330, ..., (3, 3, 3): 13, (2, 2, 2): 7, (1, 1, 1): 1})
>>> accumulate_roll_counts(Counter(), p_6d6.rolls_with_counts(5, 3, 1)) == every_other_d6
True
>>> accumulate_roll_counts(Counter(), p_6d6.rolls_with_counts(*range(5, 0, -2))) == every_other_d6
True
>>> accumulate_roll_counts(Counter(), p_6d6.rolls_with_counts(*(i for i in range(6, 0, -1) if i % 2 == 1))) == every_other_d6
True

One way to model the likelihood of achieving a “Yahtzee” (i.e., where five six-sided dice show the same face) on a single roll is to check each roll for whether the least and greatest outcomes are the same.

>>> p_5d6 = 5@P(6)
>>> yahtzee_on_single_roll = H(
...   (1 if roll[0] == roll[-1] else 0, count)
...   for roll, count
...   in p_5d6.rolls_with_counts()
... )
>>> print(yahtzee_on_single_roll.format(width=0))
{..., 0: 99.92%, 1:  0.08%}

In the general case, rolls may appear more than once.

>>> list(P(H(2), H(3)).rolls_with_counts())
[((1, 1), 1), ((1, 2), 1), ((1, 3), 1), ((1, 2), 1), ((2, 2), 1), ((2, 3), 1)]

In the above, (1, 2) appears a total of two times, each with counts of one.

However, if the pool is homogeneous (meaning it only contains identical histograms), rolls (before selection) are not repeated. (See the note on implementation below.)

>>> list((2@P(H((-1, 0, 1)))).rolls_with_counts())
[((-1, -1), 1), ((-1, 0), 2), ((-1, 1), 2), ((0, 0), 1), ((0, 1), 2), ((1, 1), 1)]

Either way, by summing and counting all rolls, we can confirm identity.

>>> d6 = H(6)
>>> d6avg = H((2, 3, 3, 4, 4, 5))
>>> p = 2@P(d6, d6avg)
>>> H((sum(roll), count) for roll, count in p.rolls_with_counts()) == p.h() == d6 + d6 + d6avg + d6avg
True

This method does not try to outsmart callers by (mis)interpreting selection arguments. It honors selection identifier order and any redundancy.

>>> p_d3_d4 = P(H(3), H(4))
>>> # Select the second, first, then second (again) elements
>>> list(p_d3_d4.rolls_with_counts(-1, 0, 1))
[((1, 1, 1), 1), ((2, 1, 2), 1), ((3, 1, 3), 1), ((4, 1, 4), 1), ..., ((3, 1, 3), 1), ((3, 2, 3), 1), ((3, 3, 3), 1), ((4, 3, 4), 1)]

Selecting the same outcomes, but in a different order, is not immediately comparable.

>>> select_0_1 = list(p_d3_d4.rolls_with_counts(0, 1))
>>> select_1_0 = list(p_d3_d4.rolls_with_counts(1, 0))
>>> select_0_1 == select_1_0
False

Equivalence can be tested when selected outcomes are sorted.

>>> sorted_0_1 = [(sorted(roll), count) for roll, count in select_0_1]
>>> sorted_1_0 = [(sorted(roll), count) for roll, count in select_1_0]
>>> sorted_0_1 == sorted_1_0
True

They can also be summed and counted, which is equivalent to calling the h method with identical selection arguments.

>>> summed_0_1 = H((sum(roll), count) for roll, count in select_0_1)
>>> summed_1_0 = H((sum(roll), count) for roll, count in select_1_0)
>>> summed_0_1 == summed_1_0 == p_d3_d4.h(0, 1) == p_d3_d4.h(1, 0)
True

About the implementation

Enumeration is substantially more efficient for homogeneous pools than heterogeneous ones, because we are able to avoid the expensive enumeration of the Cartesian product using several techniques.

Taking \(k\) outcomes, where \(k\) selects fewer than all \(n\) outcomes, a homogeneous pool benefits from Ilmari Karonen’s optimization, which appears to scale geometrically with \(k\) times some factor of \(n\) (e.g., \(\log n\), but I haven’t bothered to figure that out yet), such that—in observed testing, at least—it is generally the fastest approach for \(k < n\).

Where \(k = n\), we leverage the multinomial coefficient, which appears to scale generally with \(n\).

\[ {{n} \choose {{{k}_{1}},{{k}_{2}},\ldots,{{k}_{m}}}} = {\frac {{n}!} {{{k}_{1}}! {{k}_{2}}! \cdots {{k}_{m}}!}} \]

We enumerate combinations with replacement, and then compute the number of permutations with repetitions for each combination. Consider n@P(H(m)). Enumerating combinations with replacement would yield all unique rolls.

((1, 1, …, 1), (1, 1, …, 2), …, (1, 1, …, m), …, (m - 1, m, …, m), (m, m, …, m))

To determine the count for a particular roll (a, b, …, n), we compute the multinomial coefficient for that roll and multiply by the scalar H(m)[a] * H(m)[b] * … * H(m)[n]. (See this for an in-depth exploration of the topic.)
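
As a hand-worked illustration (not taken from the library’s own examples), the roll (1, 1, 2) from 3@P(H(6)) has multinomial coefficient 3! / (2! × 1!) = 3 and scalar H(6)[1] * H(6)[1] * H(6)[2] = 1, for a count of 3:

``` python
>>> dict((3@P(H(6))).rolls_with_counts())[(1, 1, 2)]
3

```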

Further, this implementation attempts to optimize heterogeneous pools by breaking them into homogeneous groups before computing the Cartesian product of those sub-results. This approach allows homogeneous pools to be ordered without duplicates, where heterogeneous ones offer no such guarantees.

As expected, this optimization allows the performance of arbitrary selection from mixed pools to sit between that of purely homogeneous and purely heterogeneous ones. Note, however, that all three appear to scale geometrically in some way.

(P(6, 6, 6, 6)).h(slice(0)):
22.7 µs ± 463 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
(P(6, 6, 6, 6)).h(slice(1)):
740 µs ± 16.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
(P(6, 6, 6, 6)).h(slice(2)):
2.25 ms ± 97.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
(P(6, 6, 6, 6)).h(slice(3)):
4.44 ms ± 219 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
(P(6, 6, 6, 6, 6, 6)).h(slice(2)):
2.12 ms ± 26 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
(P(6, 6, 6, 6, 6, 6)).h(slice(3)):
4.46 ms ± 75.5 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
(P(6, 6, 6, 6, 6, 6)).h(slice(4)):
7.89 ms ± 186 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
(P(6, 6, 6, 6, 6, 6)).h(slice(5)):
12.7 ms ± 287 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

(P(H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}), 6, 6)).h(slice(0)):
22.6 µs ± 820 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
(P(H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}), 6, 6)).h(slice(1)):
2.51 ms ± 102 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
(P(H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}), 6, 6)).h(slice(2)):
5.31 ms ± 63 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
(P(H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}), 6, 6)).h(slice(3)):
5.37 ms ± 98.5 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
(P(H({-2: 1, -1: 1, 0: 1, 1: 1, 2: 1, 3: 1}), H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}), 6, 6, 6)).h(slice(2)):
32.4 ms ± 1.36 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
(P(H({-2: 1, -1: 1, 0: 1, 1: 1, 2: 1, 3: 1}), H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}), 6, 6, 6)).h(slice(3)):
78.6 ms ± 918 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
(P(H({-2: 1, -1: 1, 0: 1, 1: 1, 2: 1, 3: 1}), H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}), 6, 6, 6)).h(slice(4)):
80.8 ms ± 2.18 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
(P(H({-2: 1, -1: 1, 0: 1, 1: 1, 2: 1, 3: 1}), H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}), 6, 6, 6)).h(slice(5)):
79.9 ms ± 2.81 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

(P(H({-3: 1, -2: 1, -1: 1, 0: 1, 1: 1, 2: 1}), H({-2: 1, -1: 1, 0: 1, 1: 1, 2: 1, 3: 1}), H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}))).h(slice(0)):
22.6 µs ± 438 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
(P(H({-3: 1, -2: 1, -1: 1, 0: 1, 1: 1, 2: 1}), H({-2: 1, -1: 1, 0: 1, 1: 1, 2: 1, 3: 1}), H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}))).h(slice(1)):
8.98 ms ± 169 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
(P(H({-3: 1, -2: 1, -1: 1, 0: 1, 1: 1, 2: 1}), H({-2: 1, -1: 1, 0: 1, 1: 1, 2: 1, 3: 1}), H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}))).h(slice(2)):
9.09 ms ± 205 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
(P(H({-3: 1, -2: 1, -1: 1, 0: 1, 1: 1, 2: 1}), H({-2: 1, -1: 1, 0: 1, 1: 1, 2: 1, 3: 1}), H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}))).h(slice(3)):
9.09 ms ± 350 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
(P(H({-5: 1, -4: 1, -3: 1, -2: 1, -1: 1, 0: 1}), H({-4: 1, -3: 1, -2: 1, -1: 1, 0: 1, 1: 1}), H({-3: 1, -2: 1, -1: 1, 0: 1, 1: 1, 2: 1}), H({-2: 1, -1: 1, 0: 1, 1: 1, 2: 1, 3: 1}), H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}))).h(slice(2)):
312 ms ± 9.72 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
(P(H({-5: 1, -4: 1, -3: 1, -2: 1, -1: 1, 0: 1}), H({-4: 1, -3: 1, -2: 1, -1: 1, 0: 1, 1: 1}), H({-3: 1, -2: 1, -1: 1, 0: 1, 1: 1, 2: 1}), H({-2: 1, -1: 1, 0: 1, 1: 1, 2: 1, 3: 1}), H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}))).h(slice(3)):
314 ms ± 2.45 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
(P(H({-5: 1, -4: 1, -3: 1, -2: 1, -1: 1, 0: 1}), H({-4: 1, -3: 1, -2: 1, -1: 1, 0: 1, 1: 1}), H({-3: 1, -2: 1, -1: 1, 0: 1, 1: 1, 2: 1}), H({-2: 1, -1: 1, 0: 1, 1: 1, 2: 1, 3: 1}), H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}))).h(slice(4)):
334 ms ± 10.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
(P(H({-5: 1, -4: 1, -3: 1, -2: 1, -1: 1, 0: 1}), H({-4: 1, -3: 1, -2: 1, -1: 1, 0: 1, 1: 1}), H({-3: 1, -2: 1, -1: 1, 0: 1, 1: 1, 2: 1}), H({-2: 1, -1: 1, 0: 1, 1: 1, 2: 1, 3: 1}), H({-1: 1, 0: 1, 1: 1, 2: 1, 3: 1, 4: 1}), H({0: 1, 1: 1, 2: 1, 3: 1, 4: 1, 5: 1}))).h(slice(5)):
324 ms ± 5.48 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Source: perf_rolls_with_counts.ipy

from dyce import H, P

for n in (4, 6):
  p = n@P(6)
  for i in range(len(p) - 4, len(p)):
    print(f"({p}).h(slice({i})):")
    %timeit p.h(slice(i))

print()

for n in (2, 3):
  p = P(n@P(6), *[H(6) - m for m in range(n, 0, -1)])
  for i in range(len(p) - 4, len(p)):
    print(f"({p}).h(slice({i})):")
    %timeit p.h(slice(i))

print()

for n in (4, 6):
  p = P(*[H(6) - m for m in range(n, 0, -1)])
  for i in range(len(p) - 4, len(p)):
    print(f"({p}).h(slice({i})):")
    %timeit p.h(slice(i))

print()

Source code in dyce/p.py
@beartype
def rolls_with_counts(self, *which: _GetItemT) -> Iterator[_RollCountT]:
    r"""
    Returns an iterator yielding 2-tuples (pairs) that, collectively, enumerate all
    possible outcomes for the pool. The first item in the 2-tuple is a sorted
    sequence of the outcomes for a distinct roll. The second is the count for that
    roll. Outcomes in each roll are ordered least to greatest.

    If one or more arguments are provided, this method selects subsets of outcomes
    for each roll. Outcomes in each roll are ordered from least (index ``#!python
    0``) to greatest (index ``#!python -1`` or ``#!python len(self) - 1``).
    Identifiers can be ``#!python int``s or ``#!python slice``s, and can be mixed
    for more flexible selections.

    ``` python
    >>> from collections import Counter
    >>> def accumulate_roll_counts(counter, roll_counts):
    ...   for roll, count in roll_counts:
    ...     counter[roll] += count
    ...   return counter
    >>> p_6d6 = 6@P(6)
    >>> every_other_d6 = accumulate_roll_counts(Counter(), p_6d6.rolls_with_counts(slice(None, None, -2))) ; every_other_d6
    Counter({(6, 4, 2): 4110, (6, 5, 3): 3390, (6, 4, 3): 3330, ..., (3, 3, 3): 13, (2, 2, 2): 7, (1, 1, 1): 1})
    >>> accumulate_roll_counts(Counter(), p_6d6.rolls_with_counts(5, 3, 1)) == every_other_d6
    True
    >>> accumulate_roll_counts(Counter(), p_6d6.rolls_with_counts(*range(5, 0, -2))) == every_other_d6
    True
    >>> accumulate_roll_counts(Counter(), p_6d6.rolls_with_counts(*(i for i in range(6, 0, -1) if i % 2 == 1))) == every_other_d6
    True

    ```

    One way to model the likelihood of achieving a “Yahtzee” (i.e., where five
    six-sided dice show the same face) on a single roll is to check each roll for
    whether the least and greatest outcomes are the same.

    ``` python
    >>> p_5d6 = 5@P(6)
    >>> yahtzee_on_single_roll = H(
    ...   (1 if roll[0] == roll[-1] else 0, count)
    ...   for roll, count
    ...   in p_5d6.rolls_with_counts()
    ... )
    >>> print(yahtzee_on_single_roll.format(width=0))
    {..., 0: 99.92%, 1:  0.08%}

    ```

    !!! note "In the general case, rolls may appear more than once."

    ``` python
    >>> list(P(H(2), H(3)).rolls_with_counts())
    [((1, 1), 1), ((1, 2), 1), ((1, 3), 1), ((1, 2), 1), ((2, 2), 1), ((2, 3), 1)]

    ```

    In the above, ``#!python (1, 2)`` appears a total of two times, each with counts
    of one.

    However, if the pool is homogeneous (meaning it only contains identical
    histograms), rolls (before selection) are not repeated. (See the note on
    implementation below.)

    ``` python
    >>> list((2@P(H((-1, 0, 1)))).rolls_with_counts())
    [((-1, -1), 1), ((-1, 0), 2), ((-1, 1), 2), ((0, 0), 1), ((0, 1), 2), ((1, 1), 1)]

    ```

    Either way, by summing and counting all rolls, we can confirm identity.

    ``` python
    >>> d6 = H(6)
    >>> d6avg = H((2, 3, 3, 4, 4, 5))
    >>> p = 2@P(d6, d6avg)
    >>> H((sum(roll), count) for roll, count in p.rolls_with_counts()) == p.h() == d6 + d6 + d6avg + d6avg
    True

    ```

    This method does not try to outsmart callers by (mis)interpreting selection
    arguments. It honors selection identifier order and any redundancy.

    ``` python
    >>> p_d3_d4 = P(H(3), H(4))
    >>> # Select the second, first, then second (again) elements
    >>> list(p_d3_d4.rolls_with_counts(-1, 0, 1))
    [((1, 1, 1), 1), ((2, 1, 2), 1), ((3, 1, 3), 1), ((4, 1, 4), 1), ..., ((3, 1, 3), 1), ((3, 2, 3), 1), ((3, 3, 3), 1), ((4, 3, 4), 1)]

    ```

    Selecting the same outcomes, but in a different order, is not immediately
    comparable.

    ``` python
    >>> select_0_1 = list(p_d3_d4.rolls_with_counts(0, 1))
    >>> select_1_0 = list(p_d3_d4.rolls_with_counts(1, 0))
    >>> select_0_1 == select_1_0
    False

    ```

    Equivalence can be tested when selected outcomes are sorted.

    ``` python
    >>> sorted_0_1 = [(sorted(roll), count) for roll, count in select_0_1]
    >>> sorted_1_0 = [(sorted(roll), count) for roll, count in select_1_0]
    >>> sorted_0_1 == sorted_1_0
    True

    ```

    They can also be summed and counted, which is equivalent to calling the
    [``h`` method][dyce.p.P.h] with identical selection arguments.

    ``` python
    >>> summed_0_1 = H((sum(roll), count) for roll, count in select_0_1)
    >>> summed_1_0 = H((sum(roll), count) for roll, count in select_1_0)
    >>> summed_0_1 == summed_1_0 == p_d3_d4.h(0, 1) == p_d3_d4.h(1, 0)
    True

    ```

    !!! info "About the implementation"

        Enumeration is substantially more efficient for homogeneous pools than
        heterogeneous ones, because we are able to avoid the expensive enumeration
        of the Cartesian product using several techniques.

        Taking $k$ outcomes, where $k$ selects fewer than all $n$ outcomes, a
        homogeneous pool benefits from [Ilmari Karonen’s
        optimization](https://rpg.stackexchange.com/a/166663/71245), which appears
        to scale geometrically with $k$ times some factor of $n$ (e.g., $\log n$,
        but I haven’t bothered to figure that out yet), such that—in observed
        testing, at least—it is generally the fastest approach for $k < n$.

        Where $k = n$, we leverage the [*multinomial
        coefficient*](https://en.wikipedia.org/wiki/Permutation#Permutations_of_multisets),
        which appears to scale generally with $n$.

        $$
        {{n} \choose {{{k}_{1}},{{k}_{2}},\ldots,{{k}_{m}}}}
        = {\frac {{n}!} {{{k}_{1}}! {{k}_{2}}! \cdots {{k}_{m}}!}}
        $$

        We enumerate combinations with replacement, and then compute the number of
        permutations with repetitions for each combination. Consider ``#!python
        n@P(H(m))``. Enumerating combinations with replacement would yield all
        unique rolls.

        ``#!python ((1, 1, …, 1), (1, 1, …, 2), …, (1, 1, …, m), …, (m - 1, m, …,
        m), (m, m, …, m))``

        To determine the count for a particular roll ``#!python (a, b, …, n)``, we
        compute the multinomial coefficient for that roll and multiply by the scalar
        ``#!python H(m)[a] * H(m)[b] * … * H(m)[n]``. (See
        [this](https://www.lucamoroni.it/the-dice-roll-sum-problem/) for an in-depth
        exploration of the topic.)

        Further, this implementation attempts to optimize heterogeneous pools by
        breaking them into homogeneous groups before computing the Cartesian product
        of those sub-results. This approach allows homogeneous pools to be ordered
        without duplicates, where heterogeneous ones offer no such guarantees.

        As expected, this optimization allows the performance of arbitrary selection
        from mixed pools to sit between that of purely homogeneous and purely
        heterogeneous ones. Note, however, that all three appear to scale
        geometrically in some way.

        ``` python
        --8<-- "docs/assets/perf_rolls_with_counts.txt"
        ```

        <details>
        <summary>Source: <a href="https://github.com/posita/dyce/blob/latest/docs/assets/perf_rolls_with_counts.ipy"><code>perf_rolls_with_counts.ipy</code></a></summary>

        ``` python
        --8<-- "docs/assets/perf_rolls_with_counts.ipy"
        ```
        </details>
    """
    n = len(self)

    if not which:
        i: Optional[int] = n
    else:
        i = _analyze_selection(n, which)

    if i == 0 or n == 0:
        rolls_with_counts_iter: Iterable[_RollCountT] = iter(())
    else:
        groups = tuple((h, sum(1 for _ in hs)) for h, hs in groupby(self))

        if len(groups) == 1:
            # Based on cursory performance analysis, calling the homogeneous
            # implementation directly provides about a 15% performance savings over
            # merely falling through to _rwc_heterogeneous_h_groups. Maybe
            # itertools.product adds significant overhead?
            h, hn = groups[0]
            assert hn == n

            # Still in search of a better (i.e., more efficient) way:
            # https://math.stackexchange.com/questions/4173084/probability-distribution-of-k-1-k-2-cdots-k-m-selections-of-arbitrary-posi
            if i and abs(i) < n:
                rolls_with_counts_iter = (
                    _rwc_homogeneous_n_h_using_karonen_partial_selection(
                        h, n, i, fill=0
                    )
                )
            else:
                rolls_with_counts_iter = (
                    _rwc_homogeneous_n_h_using_multinomial_coefficient(h, n)
                )
        else:
            rolls_with_counts_iter = _rwc_heterogeneous_h_groups(groups, i)

    for sorted_outcomes_for_roll, roll_count in rolls_with_counts_iter:
        if which:
            taken_outcomes = tuple(getitems(sorted_outcomes_for_roll, which))
        else:
            taken_outcomes = sorted_outcomes_for_roll

        yield taken_outcomes, roll_count

umap(self, op: _UnaryOperatorT) -> P

Shorthand for P(*(h.umap(op) for h in self)). See the H.umap method.

>>> import operator
>>> p_3d6 = 3@P(H((-3, -1, 2, 4)))
>>> p_3d6.umap(operator.__neg__)
P(H({-4: 1, -2: 1, 1: 1, 3: 1}), H({-4: 1, -2: 1, 1: 1, 3: 1}), H({-4: 1, -2: 1, 1: 1, 3: 1}))
>>> p_3d6.umap(operator.__abs__)
P(H({1: 1, 2: 1, 3: 1, 4: 1}), H({1: 1, 2: 1, 3: 1, 4: 1}), H({1: 1, 2: 1, 3: 1, 4: 1}))
Source code in dyce/p.py
@beartype
def umap(self, op: _UnaryOperatorT) -> P:
    r"""
    Shorthand for ``#!python P(*(h.umap(op) for h in self))``. See the
    [``H.umap`` method][dyce.h.H.umap].

    ``` python
    >>> import operator
    >>> p_3d6 = 3@P(H((-3, -1, 2, 4)))
    >>> p_3d6.umap(operator.__neg__)
    P(H({-4: 1, -2: 1, 1: 1, 3: 1}), H({-4: 1, -2: 1, 1: 1, 3: 1}), H({-4: 1, -2: 1, 1: 1, 3: 1}))
    >>> p_3d6.umap(operator.__abs__)
    P(H({1: 1, 2: 1, 3: 1, 4: 1}), H({1: 1, 2: 1, 3: 1, 4: 1}), H({1: 1, 2: 1, 3: 1, 4: 1}))

    ```
    """
    return P(*(h.umap(op) for h in self))