2019 Academic Forum on Probability, Statistics, and Related Fields

Posted: July 31, 2019    Author: 刘源远

Date: August 1, 2019    Venue: Room 145, First Floor, School of Mathematics and Statistics


08:30-09:30

Talk: Equilibrium of Decentralized Decisions on Queueing

Speaker: 王家礼 (Dong Hwa University, Taiwan)

Abstract: Suppose that customers arriving at a queueing system have different tolerances for waiting. Each customer first observes the number of customers in the system upon arrival, and then decides, based on the expected waiting time, whether to join for service or to balk. This decentralized decision problem is a noncooperative game in which heterogeneous customers compete for limited service capacity. Under general rules of service, the system is shown to have at most one strategy (a collection of individual decisions) that yields an equilibrium state. We study several properties and the class dominance of the equilibrium strategy to gain insight into the dependence and competition of decentralized decisions. On the other hand, if a system has no equilibrium strategy and thus fluctuates, we show that it can be stabilized by a service-rate adjustment. This operational means, unlike the common pricing scheme, has the merit of incentive compatibility. For the associated fairness issue, we take an axiomatic approach to define several criteria and show how to find the optimal adjustment under each criterion.

Chair: 刘源远
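As background for the join-or-balk decision described in the abstract, the classic single-class observable queue (Naor's model) admits a simple threshold rule; the sketch below illustrates only that simpler homogeneous setting, not the heterogeneous-customer game of the talk, and the reward R, service rate mu, and waiting cost c are hypothetical illustration parameters:

```python
import math

def equilibrium_threshold(R, mu, c):
    """Naor's observable M/M/1 rule: an arriving customer who sees n in
    system joins iff the expected sojourn cost c*(n+1)/mu is at most the
    service reward R, giving the joining threshold floor(R*mu/c)."""
    return math.floor(R * mu / c)

def joins(n, R, mu, c):
    """Individual decision of a customer who observes n in system."""
    return c * (n + 1) / mu <= R

# Hypothetical numbers: reward 10, service rate 1, waiting cost 2 per
# unit time; customers join while fewer than 5 are already present.
assert equilibrium_threshold(10.0, 1.0, 2.0) == 5
assert joins(4, 10.0, 1.0, 2.0) and not joins(5, 10.0, 1.0, 2.0)
```

Because every customer applies the same rule, the collection of these individual decisions is itself the equilibrium strategy in the homogeneous case.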

09:30-10:30

Talk: Nonzero-sum stochastic games with probability criteria

Speaker: 郭先平 (Sun Yat-sen University)

Abstract: In this talk, we consider two-person nonzero-sum discrete-time stochastic games under the probability criterion. First, we give a characterization of the probability criterion. Then, under a mild condition, we establish the existence of a Nash equilibrium. Finally, a queueing system is provided to show the application of our main result.


Coffee break (20 minutes)



10:50-11:50

Talk: Variance Minimization of MDPs and Its Application to Fluctuation Reduction of Renewable Energy

Speaker: 夏俐 (Sun Yat-sen University)

Abstract: In this talk, we study a variance minimization problem in an infinite-stage discrete-time Markov decision process (MDP), regardless of the mean performance. For a Markov chain under the steady-state variance criterion, the value of the cost function at the current stage is affected by future actions, so this problem is not a standard MDP and the traditional MDP theory does not apply: the principle of time consistency in dynamic programming fails, and the Bellman optimality equation does not hold. We convert the variance minimization problem into a standard MDP by introducing a concept called pseudo-variance. We then derive a variance difference formula that quantifies the difference between the variances of Markov systems under any two policies. With this difference formula, we develop a policy-iteration-type algorithm to efficiently reduce the variance of MDPs. We demonstrate the effectiveness of our approach by applying it to the fluctuation reduction problem of renewable energy with storage systems.
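The pseudo-variance idea can be sketched on a toy finite MDP. All transition probabilities and rewards below are made up for illustration, and the brute-force check only demonstrates why freezing the reference mean restores a standard average-cost criterion; it is not the policy-iteration algorithm of the talk:

```python
import itertools
import numpy as np

# Toy 2-state, 2-action MDP; all numbers are hypothetical.
# P[s, a] = next-state distribution, r[s, a] = one-step reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
r = np.array([[1.0, 0.0],
              [0.0, 2.0]])

def stationary(Ppol):
    """Stationary distribution of an ergodic transition matrix."""
    vals, vecs = np.linalg.eig(Ppol.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

def steady_variance(policy):
    """Steady-state variance (and mean) of the reward process under a
    deterministic policy -- not a standard MDP criterion, because the
    mean inside the square itself depends on the policy."""
    Ppol = np.array([P[s, policy[s]] for s in (0, 1)])
    rpol = np.array([r[s, policy[s]] for s in (0, 1)])
    pi = stationary(Ppol)
    mean = pi @ rpol
    return pi @ (rpol - mean) ** 2, mean

def pseudo_variance(policy, f):
    """Long-run average of (r - f)^2 with the reference level f frozen:
    once f is fixed, this is a standard average-cost criterion."""
    Ppol = np.array([P[s, policy[s]] for s in (0, 1)])
    rpol = np.array([r[s, policy[s]] for s in (0, 1)])
    pi = stationary(Ppol)
    return pi @ (rpol - f) ** 2

# At f equal to a policy's own mean, the pseudo-variance coincides with
# the true steady-state variance of that policy.
for pol in itertools.product([0, 1], repeat=2):
    var, mean = steady_variance(pol)
    assert abs(pseudo_variance(pol, mean) - var) < 1e-9
```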

12:00-13:00

Lunch


15:00-16:00

Talk: Asymptotic behaviors of uniform error for the NPMLE in the current status model

Speaker: 高付清 (Wuhan University)

Abstract: We study the asymptotic behaviors of the uniform error of the nonparametric maximum likelihood estimator (NPMLE) in the current status model. We obtain the uniform asymptotic distribution, the Cramér-type uniform moderate deviations, the self-normalized uniform asymptotic distributions, and the self-normalized Cramér-type uniform moderate deviations for the estimator.
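As background, the NPMLE of the distribution function in the current status model is the isotonic regression of the censoring indicators on the ordered observation times, computable by the pool-adjacent-violators algorithm (PAVA). The minimal sketch below illustrates only this estimator, not the uniform-error asymptotics of the talk; the data are made up:

```python
def npmle_current_status(times, deltas):
    """NPMLE of F at the ordered observation times in the current status
    model: the isotonic regression of the indicators delta_i on t_i,
    computed by pool-adjacent-violators (PAVA)."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    blocks = []  # each block is [sum of deltas, count]
    for i in order:
        blocks.append([float(deltas[i]), 1])
        # merge adjacent blocks while their means violate monotonicity
        while (len(blocks) > 1
               and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]):
            s, n = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += n
    fitted = []
    for s, n in blocks:
        fitted.extend([s / n] * n)
    return [times[i] for i in order], fitted

# Hypothetical data observed at times 1..4; delta_i = 1 means the event
# had already occurred by time t_i.
ts, fhat = npmle_current_status([1, 2, 3, 4], [0, 1, 0, 1])
assert fhat == [0.0, 0.5, 0.5, 1.0]  # monotone fit after pooling t_2, t_3
```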


16:00-17:00

Talk: Optimal Dividend Problems for the Sparre Andersen Risk Model

Speaker: 刘国欣 (Hebei University of Technology)

Abstract: In this talk, we study the optimal dividend problem for the Sparre Andersen risk model with an arbitrary inter-claim time distribution. Analytic characterizations of admissible strategies and Markov strategies are given. We use the measure-valued generator theory to derive a measure-valued dynamic programming equation. The value function is proved to be of locally finite variation along the path and to belong to the domain of the measure-valued generator. The verification theorem is proved without additional assumptions on the regularity of the value function; in fact, the value function may have jumps. Under certain conditions, the optimal strategy is shown to be a Markov strategy with a space-time band structure. Finally, a successive approximation scheme for both the value function and the optimal strategy is presented, together with numerical results.



