First of all, I would like to say thank you to all of the contributors of this useful package!
I am a learner of both RL and this package. I wonder whether RL, or this package, can deal with problems that have a continuous action space or a mixed-integer action space. Specifically, suppose we have a decision-making problem in which, at each step, we make a decision (an action) $a$ according to the current state $s$ and an observation of a random noise $\omega$, i.e., the action space $\mathcal{A}$ is characterized by some constraints such as $\mathcal{A} = \{\, a \mid f(a, s, \omega) \le 0 \,\}$ (BTW, the reason GitHub can't display `{}` in math is that braces are grouping characters in LaTeX and must be escaped as `\{` and `\}`). Can RL deal with this kind of problem? And how can I write such an environment using ReinforcementLearning.jl?
Looking forward to your reply at your convenience! Thanks!
Yes, but how to model the action space mainly depends on your problem. For example, you can parameterize the action with a continuous distribution such as a negative log-normal. Or you can split the action space into discrete bins then randomly sample from it.
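One way to combine the two ideas above with the constraint from the question: discretize a bounded candidate range and keep only the bins that satisfy $f(a, s, \omega) \le 0$ at the current state. Below is a rough sketch of a custom environment against the RLBase-style interface of ReinforcementLearning.jl. Note the method names follow the v0.10-era API and may differ in other versions, and the constraint `f`, dynamics, reward, and bounds are made-up placeholders, not anything from the package:

```julia
using ReinforcementLearning  # assumes the RLBase interface is re-exported

mutable struct ConstrainedEnv <: AbstractEnv
    s::Float64   # current state
    ω::Float64   # current noise observation
    done::Bool
end

# Placeholder constraint defining 𝒜 = { a | f(a, s, ω) ≤ 0 }
f(a, s, ω) = a^2 - s - ω

# Discretize a (here assumed) bounded range into candidate bins,
# then keep only the feasible ones at the current (s, ω).
candidate_actions(env::ConstrainedEnv) = range(-1.0, 1.0; length=21)
RLBase.action_space(env::ConstrainedEnv) =
    [a for a in candidate_actions(env) if f(a, env.s, env.ω) ≤ 0]

RLBase.state(env::ConstrainedEnv) = [env.s, env.ω]
RLBase.reward(env::ConstrainedEnv) = env.done ? -abs(env.s) : 0.0
RLBase.is_terminated(env::ConstrainedEnv) = env.done

function RLBase.reset!(env::ConstrainedEnv)
    env.s = rand()
    env.ω = randn()
    env.done = false
end

# Acting on the environment: placeholder dynamics, then draw fresh noise.
function (env::ConstrainedEnv)(a)
    env.s += a + env.ω
    env.ω = randn()
    env.done = abs(env.s) > 5
end
```

Because the feasible set changes with `s` and `ω`, you would also need an algorithm that supports a state-dependent action set; in RLBase this is signalled through the `ActionStyle` trait and `legal_action_space`, though it is worth checking whether a particular algorithm actually honors it.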
Or you can split the action space into discrete bins then randomly sample from it.
But I wonder how I should split the action space. Since the action space is not explicitly given, but is implicitly determined by a set of constraints, how do I obtain and split this action space?
By the way, could you recommend some tutorials that are suitable for learning reinforcement learning and this package at the same time? Thank you very much!
Since the action space is not explicitly given, but is implicitly determined by a set of constraints, how do I obtain and split this action space?
It's hard to answer without further information here.
By the way, could you recommend some tutorials that are suitable for learning reinforcement learning and this package at the same time? Thank you very much!
I'd love to, but since I haven't done any RL-related work recently, I'll leave it for others to answer.