Information Theory SG Seminar
26 events
-
Accelerated equilibration in classical stochastic systems
January 13 (Wed), 2021, 13:00 - 14:00
Kyosuke Adachi (Special Postdoctoral Researcher, RIKEN Interdisciplinary Theoretical and Mathematical Sciences Program (iTHEMS) / Special Postdoctoral Researcher, Nonequilibrium Physics of Living Matter RIKEN Hakubi Research Team, RIKEN Center for Biosystems Dynamics Research (BDR))
Shortcuts to adiabaticity (STA) [1] are protocols that drive a given quantum state to a target state rapidly, which can be useful for avoiding decoherence in quantum experiments. In this journal club, I will concisely review the concept of STA and then focus on its recently proposed classical counterparts, sometimes called engineered swift equilibration, in Brownian particle systems [2] and evolutionary systems [3]. (A numerical sketch of the inverse-engineering idea appears after this entry.)
Venue: via Zoom
Event Official Language: English
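To make the inverse-engineering idea concrete, here is a minimal Python sketch for an overdamped Brownian particle in a harmonic trap. The harmonic setup, the smooth interpolation, and all parameter values are illustrative assumptions, not necessarily the protocol of [2]: one prescribes how the variance should evolve and solves for the trap stiffness that realizes it.

```python
import numpy as np

# Engineered-swift-equilibration-style protocol (sketch) for an overdamped
# particle in a harmonic trap U(x, t) = k(t) x^2 / 2. The variance
# s(t) = <x^2> of the Gaussian state obeys
#     ds/dt = (2 / gamma) * (kB*T - k(t) * s),
# so prescribing a smooth s(t) between the two equilibrium values and
# solving for k(t) yields equilibration in an arbitrarily short time tf.

gamma, kB, T = 1.0, 1.0, 1.0       # friction, Boltzmann constant, temperature
k0, kf = 1.0, 4.0                  # initial and final trap stiffnesses
tf = 0.1                           # protocol duration (<< relaxation time gamma/k0)

s0, sf = kB * T / k0, kB * T / kf  # equilibrium variances at k0 and kf

t = np.linspace(0.0, tf, 1001)
u = t / tf
s = s0 + (sf - s0) * (3 * u**2 - 2 * u**3)   # smooth, flat at both ends
ds = (sf - s0) * (6 * u - 6 * u**2) / tf     # its time derivative

# Inverse-engineered stiffness: k(t) = (kB*T - gamma * ds/dt / 2) / s(t).
k = (kB * T - 0.5 * gamma * ds) / s

print(f"k(0) = {k[0]:.2f}, k(tf) = {k[-1]:.2f}")   # matches k0 and kf
print(f"peak transient stiffness: {k.max():.1f}")  # the price of speed
```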
-
Review on the Lieb-Robinson bound
December 23 (Wed), 2020, 13:00 - 14:00
Yukimi Goto (Special Postdoctoral Researcher, RIKEN Interdisciplinary Theoretical and Mathematical Sciences Program (iTHEMS))
The Lieb-Robinson bound is an inequality on the group velocity of information propagation in quantum many-body systems. In this talk, I review the bound mathematically and explain some of its consequences. (One standard form of the bound is written out after this entry.)
Event Official Language: English
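For reference, a commonly quoted form of the bound. Constants and conventions vary between formulations, so this is a sketch rather than the exact statement of the talk:

```latex
% For local observables A and B supported on disjoint regions X and Y
% of a lattice with short-ranged interactions, and A(t) the Heisenberg
% time evolution of A, there exist constants C, mu, v > 0 such that
\[
  \bigl\| [A(t), B] \bigr\|
  \le C \,\|A\|\,\|B\|\, e^{-\mu \left( d(X,Y) - v|t| \right)} ,
\]
% where d(X,Y) is the distance between the supports and v is the
% Lieb-Robinson velocity: outside the "light cone" d(X,Y) > v|t|,
% commutators (and hence signals) are exponentially suppressed.
```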
-
Quantum Wasserstein distance of order 1
December 16 (Wed), 2020, 13:00 - 14:30
Ryusuke Hamazaki (Senior Research Scientist, RIKEN Interdisciplinary Theoretical and Mathematical Sciences Program (iTHEMS) / RIKEN Hakubi Team Leader, Nonequilibrium Quantum Statistical Mechanics RIKEN Hakubi Research Team, RIKEN Cluster for Pioneering Research (CPR))
The Wasserstein distance is an indicator of the closeness of two probability distributions and is applied in fields ranging from information theory to neural networks [1]. It is particularly useful for treating the geometry of the underlying space, such as tensor-product structures. In this journal club, I talk about one of the recent proposals for a quantum extension of the Wasserstein distance [2]. After reviewing basic properties of the classical Wasserstein distance, e.g., its relation to concentration phenomena, I discuss how they might be generalized to the quantum realm. (A classical one-dimensional example is sketched after this entry.)
Venue: via Zoom
Event Official Language: English
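To fix intuition for the classical object before its quantum extension, here is a minimal Python sketch of the order-1 Wasserstein distance on the real line, using the identity W1(mu, nu) = ∫ |F_mu - F_nu| dx between cumulative distribution functions; the distributions, sample sizes, and grid are arbitrary illustrative choices.

```python
import numpy as np

# Classical 1D Wasserstein distance of order 1 (sketch). On the real line,
# W1 between two distributions equals the L1 distance between their CDFs:
#     W1(mu, nu) = \int |F_mu(x) - F_nu(x)| dx.

def w1_from_samples(xs, ys, grid):
    """Estimate W1 between the empirical distributions of xs and ys
    by integrating |F_xs - F_ys| over a common grid."""
    F_x = np.searchsorted(np.sort(xs), grid, side="right") / len(xs)
    F_y = np.searchsorted(np.sort(ys), grid, side="right") / len(ys)
    return np.trapz(np.abs(F_x - F_y), grid)

rng = np.random.default_rng(0)
xs = rng.normal(0.0, 1.0, 100_000)   # N(0, 1)
ys = rng.normal(0.5, 1.0, 100_000)   # N(0.5, 1): shifted by 0.5

grid = np.linspace(-6, 7, 4001)
print(w1_from_samples(xs, ys, grid))  # ~0.5, the size of the shift
```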
-
Seminar
Statistical model for meaning representation of language
December 16 (Wed), 2020, 10:30 - 12:00
Koichiro Yoshino (Team Leader, Robotics Project, RIKEN Cluster for Science, Technology and Innovation Hub (RCSTI))
One of the ultimate goals of natural language processing is to build a model that captures the semantic meaning of language elements. Language modeling is a recent research trend for building such a statistical model of meaning. Language models rest on the distributional hypothesis, which states that the meaning of an element is characterized by the elements surrounding it; in other words, the relative positions of sentence elements (morphemes, words, and sentences) are essential to knowing an element's meaning. Recent work on distributed representations mainly focuses on relations between surface elements: characters, morphemes, words, and sentences. However, structural information of language, such as dependency and semantic roles, is essential for building a human-understandable statistical model of language. In this talk, we describe the basis of statistical language models and then discuss our research direction of introducing language structure. (A toy illustration of the distributional hypothesis follows this entry.)
Venue: via Zoom
Event Official Language: English
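As a toy illustration of the distributional hypothesis, here is a minimal Python sketch; the corpus, window size, and similarity measure are arbitrary choices for illustration. Simply counting context co-occurrences already makes distributionally similar words close.

```python
import numpy as np

# Distributional hypothesis in miniature: words that occur in similar
# contexts get similar vectors, via a word-context co-occurrence matrix.

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
    "the dog chased the cat",
]

tokens = sorted({w for line in corpus for w in line.split()})
index = {w: i for i, w in enumerate(tokens)}

# Count co-occurrences within a +/-2 word window.
counts = np.zeros((len(tokens), len(tokens)))
for line in corpus:
    words = line.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - 2), min(len(words), i + 3)):
            if i != j:
                counts[index[w], index[words[j]]] += 1

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# "cat" and "dog" appear in near-identical contexts, so their context
# vectors are close; "cat" and "on" are not.
print(cosine(counts[index["cat"]], counts[index["dog"]]))
print(cosine(counts[index["cat"]], counts[index["on"]]))
```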
-
Journal Club of Information Theory SG II
December 8 (Tue), 2020, 13:00 - 14:00
Akinori Tanaka (Senior Research Scientist, RIKEN Interdisciplinary Theoretical and Mathematical Sciences Program (iTHEMS))
The practical update rule of deep neural networks based on stochastic gradient descent is quite similar to the stochastic dynamics described by the Langevin equation. For a Langevin system, one can "derive" the second law of thermodynamics, i.e., the increase of the total entropy of the system. This fact suggests a "second law of thermodynamics in deep learning." In this talk, I explain the idea roughly; there is no concrete new result, but I hope it offers new perspectives for studying neural networks. (The SGD-Langevin analogy is sketched in code after this entry.)
Venue: via Zoom
Event Official Language: English
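A minimal numerical sketch of this analogy; the quadratic loss, the modeling of minibatch noise as additive Gaussian noise, and all parameter values are assumptions for illustration.

```python
import numpy as np

# Sketch of the SGD <-> Langevin analogy for a 1D quadratic loss
# L(w) = w^2 / 2, whose full gradient is w. Minibatch noise in SGD plays
# the role of thermal noise in an overdamped Langevin equation:
#     SGD:      w <- w - eta * (grad L(w) + minibatch_noise)
#     Langevin: w <- w - eta * grad L(w) + sqrt(2 * eta * T) * xi
# Both settle into a stationary distribution around the minimum whose
# width is set by the noise strength.

rng = np.random.default_rng(1)
eta, T, n_steps = 0.01, 0.05, 200_000

w_sgd, w_lan = 2.0, 2.0
sgd_tail, lan_tail = [], []
for step in range(n_steps):
    # Minibatch gradient modeled as the true gradient plus zero-mean noise
    # (an idealization; real minibatch noise is state-dependent).
    g_noise = rng.normal(0.0, np.sqrt(2 * T / eta))
    w_sgd -= eta * (w_sgd + g_noise)
    # Euler-Maruyama step of the overdamped Langevin equation.
    w_lan += -eta * w_lan + np.sqrt(2 * eta * T) * rng.normal()
    if step > n_steps // 2:
        sgd_tail.append(w_sgd)
        lan_tail.append(w_lan)

# Both stationary variances are close to the Gibbs value T (for L = w^2/2).
print(np.var(sgd_tail), np.var(lan_tail))
```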
-
Journal Club of Information Theory SG
December 1 (Tue), 2020, 13:00 - 14:00
Akinori Tanaka (Senior Research Scientist, RIKEN Interdisciplinary Theoretical and Mathematical Sciences Program (iTHEMS))
The practical update rule of deep neural networks based on stochastic gradient descent is quite similar to the stochastic dynamics described by the Langevin equation. For a Langevin system, one can "derive" the second law of thermodynamics, i.e., the increase of the total entropy of the system. This fact suggests a "second law of thermodynamics in deep learning." In this talk, I explain the idea roughly; there is no concrete new result, but I hope it offers new perspectives for studying neural networks. (A numerical check of the entropy increase for a Langevin system follows this entry.) *For details about the seminar, please refer to the email.
Venue: via Zoom
Event Official Language: English
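To make the entropy statement concrete, here is a minimal numerical sketch; the harmonic potential, the Gaussian initial condition, and all parameter values are illustrative assumptions. It checks that the total entropy change (system plus medium) is non-negative when an overdamped Langevin particle relaxes to equilibrium.

```python
import numpy as np

# Numerical check (sketch) of the "second law" for an overdamped Langevin
# particle relaxing in a fixed harmonic potential U(x) = k x^2 / 2, in
# units with friction gamma = 1 and kB = 1. Starting from a non-equilibrium
# Gaussian state, dS_sys + dS_med should come out non-negative.

rng = np.random.default_rng(2)
k, T = 1.0, 1.0
dt, n_steps, n_traj = 1e-2, 1000, 50_000

s0 = 4.0 * T / k                       # initial variance != equilibrium T/k
x = rng.normal(0.0, np.sqrt(s0), n_traj)
for _ in range(n_steps):               # Euler-Maruyama relaxation to t = 10
    x += -k * x * dt + np.sqrt(2 * T * dt) * rng.normal(size=n_traj)
sf = np.var(x)                         # ~ equilibrium value T/k

# For a Gaussian state of variance s, S_sys = (1/2) ln(2*pi*e*s). No work
# is done (fixed potential), so heat (k/2)(s0 - sf) flows to the medium and
# dS_med = (k/2)(s0 - sf) / T.
dS_sys = 0.5 * np.log(sf / s0)
dS_med = 0.5 * k * (s0 - sf) / T
# Exact value for r = s0/(T/k): (1/2)(r - 1 - ln r) >= 0; here ~0.81.
print(dS_sys + dS_med)
```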