Contextual Bernoulli multi-armed bandit with one context feature active per time step t.
bandit <- ContextualBasicBandit$new(weights)
weights
  numeric matrix; a d x k matrix with the probability of reward for each of the d contextual features per each of the k arms.
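For illustration, a weights matrix with d = 3 context features and k = 4 arms could be built as below (the specific probability values are assumptions for the sketch, and the package providing ContextualBasicBandit is assumed to be attached):

# Hypothetical weights: d = 3 context features (rows) by k = 4 arms (columns).
# Each entry is the probability of a Bernoulli reward for that feature-arm pair.
weights <- matrix(c(0.9, 0.1, 0.1, 0.1,
                    0.1, 0.9, 0.1, 0.1,
                    0.1, 0.1, 0.9, 0.1),
                  nrow = 3, ncol = 4, byrow = TRUE)
bandit  <- ContextualBasicBandit$new(weights)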
new(weights)
  Generates and initializes a new ContextualBasicBandit instance.
get_context(t)
  argument:
    t: integer, time step t.
  Returns a list containing the current d x k dimensional context matrix context$X, the number of arms context$k, and the number of context features context$d.
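A minimal sketch of inspecting the returned context list (the time step value is arbitrary):

context <- bandit$get_context(t = 1)
context$X   # d x k context matrix; one context feature row is active at this t
context$d   # number of context features
context$k   # number of arms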
get_reward(t, context, action)
  arguments:
    t: integer, time step t.
    context: list containing the current context$X (d x k context matrix), context$k (number of arms) and context$d (number of context features), as set by bandit.
    action: list containing action$choice, as set by policy.
  Returns a list containing reward$reward and, where computable, reward$optimal (used by "oracle" policies and to calculate regret).
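A hedged sketch of calling get_reward directly, outside of a Simulator run (the time step and the choice of arm 1 are illustrative; in a simulation the choice is set by a policy):

context <- bandit$get_context(t = 1)
action  <- list(choice = 1)                      # arm index, normally set by a policy
reward  <- bandit$get_reward(t = 1, context, action)
reward$reward    # sampled Bernoulli reward for the chosen arm
reward$optimal   # where computable, the reward of the optimal arm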
Core contextual classes: Bandit, Policy, Simulator, Agent, History, Plot

Bandit subclass examples: ContextualBasicBandit, ContextualLogitBandit, OfflinePolicyEvaluatorBandit

Policy subclass examples: EpsilonGreedyPolicy, ContextualThompsonSamplingPolicy
# Simulate an epsilon-greedy agent on the bandit and plot cumulative regret.
horizon <- 100
sims    <- 100
policy  <- EpsilonGreedyPolicy$new(epsilon = 0.1)
bandit  <- ContextualBasicBandit$new(weights = matrix(c(0.6, 0.1, 0.1), nrow = 1))
agent   <- Agent$new(policy, bandit)
history <- Simulator$new(agent, horizon, sims)$run()
plot(history, type = "cumulative", regret = TRUE)