Performs batches of work in parallel, without respecting the original order.
class RaceSeq does Iterable does Sequence { }
method iterator(RaceSeq:D: --> Iterator:D)
Returns the underlying iterator.
method grep(RaceSeq:D: $matcher, *%options)
Applies grep to the
RaceSeq similarly to how it would do it on a Seq.
my $raced = (^10000).map(*²).race;
$raced.grep( * %% 3 ).say; # OUTPUT: «(0 9 36 81 144 ...)␤»
When you use
race on a
Seq, this is the method that is actually called.
method map(RaceSeq:D: $matcher, *%options)
Uses maps on the
RaceSeq, generally created by application of
.race to a preexisting Seq.
method hyper(RaceSeq:D:)
Creates a HyperSeq object out of the current one.
multi method serial(RaceSeq:D:)
Converts the object to a
Seq and returns it.
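For instance (the variable names here are illustrative):

```raku
# .serial drops the parallelism and yields an ordinary, sequential Seq.
my $raced  = (^100).race.map(* + 1);
my $serial = $raced.serial;
say $serial.^name;  # OUTPUT: «Seq␤»
```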
method is-lazy(--> False)
Returns False.
method sink(--> Nil)
Sinks the underlying data structure, producing any side effects.
RaceSeq does role Iterable, which provides the following routines:
method iterator(--> Iterator)
Method stub that ensures all classes doing the
Iterable role have a method iterator.
It is supposed to return an Iterator.
method flat(Iterable:D: --> Iterable)
Returns another Iterable that flattens out all iterables that the first one returns.
say (<a b>, 'c').elems;      # OUTPUT: «2␤»
say (<a b>, 'c').flat.elems; # OUTPUT: «3␤»
<a b> is a List and thus iterable, so
(<a b>, 'c').flat returns
('a', 'b', 'c'), which has three elems.
Note that the flattening is recursive, so
((("a", "b"), "c"), "d").flat returns
("a", "b", "c", "d"), but it does not flatten itemized sublists:
say ($('a', 'b'), 'c').raku; # OUTPUT: «($("a", "b"), "c")␤»
say ($('a', 'b'), 'c')>>.List.flat.elems; # OUTPUT: «3␤»
method lazy(--> Iterable)
Returns a lazy iterable wrapping the invocant.
say (1 ... 1000).is-lazy;      # OUTPUT: «False␤»
say (1 ... 1000).lazy.is-lazy; # OUTPUT: «True␤»
method hyper(Int(Cool) :$batch = 64, Int(Cool) :$degree = 4)
Returns another Iterable that is potentially iterated in parallel, with a given batch size and degree of parallelism.
The order of elements is preserved.
Use hyper in situations where it is OK to do the processing of items in parallel, and the output order should be kept relative to the input order. See
race for situations where items are processed in parallel and the output order does not matter.
The degree option (short for "degree of parallelism") configures how many parallel workers should be started. To start 4 workers (e.g. to use at most 4 cores), pass
:4degree to the
race method. Note that in some cases, choosing a degree higher than the available CPU cores can make sense, for example I/O bound work or latency-heavy tasks like web crawling. For CPU-bound work, however, it makes no sense to pick a number higher than the CPU core count.
The batch size option configures the number of items sent to a given parallel worker at once. It allows for making a throughput/latency trade-off. If, for example, an operation is long-running per item, and you need the first results as soon as possible, set it to 1. That means every parallel worker gets 1 item to process at a time, and reports the result as soon as possible. In consequence, the overhead for inter-thread communication is maximized. In the other extreme, if you have 1000 items to process and 10 workers, and you give every worker a batch of 100 items, you will incur minimal overhead for dispatching the items, but you will only get the first results when 100 items are processed by the fastest worker (or, for
hyper, when the worker getting the first batch returns.) Also, if not all items take the same amount of time to process, you might run into the situation where some workers are already done and sit around without being able to help with the remaining work. In situations where not all items take the same time to process, and you don't want too much inter-thread communication overhead, picking a number somewhere in the middle makes sense. Your aim might be to keep all workers about evenly busy to make best use of the resources available.
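As a sketch of the trade-off described above (the batch and degree numbers are illustrative, not recommendations):

```raku
# Small batches: lower latency to the first results, more communication overhead.
my @fast-first = (1..1000).hyper(:batch(1), :degree(4)).map(* ** 2);

# Large batches: less dispatch overhead, but first results arrive later.
my @bulk = (1..1000).hyper(:batch(100), :degree(4)).map(* ** 2);

# hyper preserves input order, so both produce the same list.
say @fast-first[^5];       # OUTPUT: «(1 4 9 16 25)␤»
say @bulk eqv @fast-first; # OUTPUT: «True␤»
```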
You can also check out this blog post on the semantics of hyper and race.
method race(Int(Cool) :$batch = 64, Int(Cool) :$degree = 4 --> Iterable)
Returns another Iterable that is potentially iterated in parallel, with a given batch size and degree of parallelism (number of parallel workers).
race does not preserve the order of elements.
Use race in situations where it is OK to do the processing of items in parallel, and the output order does not matter. See
hyper for situations where you want items processed in parallel and the output order should be kept relative to the input order.
See hyper for an explanation of :$batch and :$degree.
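A sketch of an order-independent use of race (the numbers chosen are illustrative):

```raku
# Summation does not depend on element order, so race is safe here.
my $sum = (1..10_000).race(:batch(256), :degree(4)).map(* ** 2).sum;
say $sum;  # OUTPUT: «333383335000␤»
```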
RaceSeq does role Sequence, which provides the following routines:
multi method Str(::?CLASS:D:)
Stringifies the cached sequence.
multi method Stringy(::?CLASS:D:)
Calls .Stringy on the cached sequence.
method Numeric(::?CLASS:D:)
Returns the number of elements in the cached sequence.
multi method AT-POS(::?CLASS:D: Int:D $idx)
multi method AT-POS(::?CLASS:D: int $idx)
Returns the element at position
$idx in the cached sequence.
multi method EXISTS-POS(::?CLASS:D: Int:D $idx)
multi method EXISTS-POS(::?CLASS:D: int $idx)
Returns a Bool indicating whether there is an element at position
$idx in the cached sequence.
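For example, positional access goes through the cached sequence (note that with race the element order is not guaranteed to match the input order):

```raku
my $r = (^100).race.map(* + 1);
say $r[0]:exists;    # calls EXISTS-POS; OUTPUT: «True␤»
say $r[1000]:exists; # OUTPUT: «False␤»
```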
method eager(::?CLASS:D: --> List:D)
my $seq = lazy 1..5;
say $seq.is-lazy; # OUTPUT: «True␤»
say $seq.eager;   # OUTPUT: «(1 2 3 4 5)␤»
say $seq.eager;
CATCH { default { say .message } }
# OUTPUT: «Throws exception if already consumed␤»
method fmt($format = '%s', $separator = ' ' --> Str:D)
multi method gist(::?CLASS:D:)
Returns the gist of the cached sequence.
RaceSeq does role PositionalBindFailover, which provides the following routines:
method cache(PositionalBindFailover:D: --> List)
Returns a List based on the
iterator method, and caches it. Subsequent calls to
cache always return the same List.
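For instance:

```raku
my $seq = (1..5).map(* + 1);
my $cached = $seq.cache;
say $cached;                # OUTPUT: «(2 3 4 5 6)␤»
say $seq.cache === $cached; # the same List object on every call: «True␤»
```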
method list(PositionalBindFailover:D: --> List)
Returns a List based on the
iterator method without caching it.
method iterator(PositionalBindFailover:D:)
This method stub ensures that a class implementing role
PositionalBindFailover provides an iterator method.