Novel multiple access protocols against Q-learning-based tunnel monitoring using flying ad hoc networks
Published 2023
Journal
WIRELESS NETWORKS
Volume -, Issue -, Pages -
Publisher
Springer Science and Business Media LLC
Online
2023-10-31
DOI
10.1007/s11276-023-03534-y
References
Related references
Note: Only part of the references are listed.

- Leon Zaporski et al. (2023). Ideal refocusing of an optically active spin qubit under strong hyperfine interactions. Nature Nanotechnology.
- Mingyu Cai et al. (2023). Safe reinforcement learning under temporal logic with reward design and quantum action selection. Scientific Reports.
- Jan Lansky et al. (2023). An energy-aware routing method using firefly algorithm for flying ad hoc networks. Scientific Reports.
- XiaoDan Wu et al. (2023). A value-based deep reinforcement learning model with human expertise in optimal treatment of sepsis. npj Digital Medicine.
- Ting Xu et al. (2023). Angiotensin blockade enhances motivational reward learning via enhancing striatal prediction error signaling and frontostriatal communication. Molecular Psychiatry.
- P. V. Venkateswara Rao et al. (2023). Millimeter assisted wave technologies in 6G assisted wireless communication systems: a new paradigm for 6G collaborative learning. Wireless Networks.
- Changjun Fan et al. (2023). Searching for spin glass ground states through deep reinforcement learning. Nature Communications.
- Julia H. Heinen et al. (2023). Novel plant–frugivore network on Mauritius is unlikely to compensate for the extinction of seed dispersers. Nature Communications.
- Jessada Sresakoolchai et al. (2023). Railway infrastructure maintenance efficiency improvement using deep reinforcement learning integrated with digital twin based on track geometry and component defects. Scientific Reports.
- Zhong Cao et al. (2023). Continuous improvement of self-driving cars using dynamic confidence-aware reinforcement learning. Nature Machine Intelligence.
- Mona Zolfaghari et al. (2022). Thermal neutron beam optimization for PGNAA applications using Q-learning algorithm and neural network. Scientific Reports.
- Gang Peng et al. (2022). A pushing-grasping collaborative method based on deep Q-network algorithm in dual viewpoints. Scientific Reports.
- Evgeny Kuprikov et al. (2022). Deep reinforcement learning for self-tuning laser source of dissipative solitons. Scientific Reports.
- Gagan Preet Kour Marwah et al. (2022). A hybrid optimization with ensemble learning to ensure VANET network stability based on performance analysis. Scientific Reports.
- Amir Masoud Rahmani et al. (2022). An energy-aware and Q-learning-based area coverage for oil pipeline monitoring systems using sensors and Internet of Things. Scientific Reports.
- Pavel Pascacio et al. (2022). Mobile device-based Bluetooth Low Energy Database for range estimation in indoor environments. Scientific Data.
- R. Lakshmana Kumar et al. (2022). A novel approach to improve network validity using various soft computing techniques. Journal of Intelligent & Fuzzy Systems.
- Jeroen C. J. Koelemeij et al. (2022). A hybrid optical–wireless network for decimetre-level terrestrial positioning. Nature.
- Yvette E. Fisher et al. (2022). Dopamine promotes head direction plasticity during orienting movements. Nature.
- Jan Lansky et al. (2022). A Q-learning-based routing scheme for smart air quality monitoring system using flying ad hoc networks. Scientific Reports.
- Samuel M. Youssef et al. (2022). Design and control of soft biomimetic pangasius fish robot using fin ray effect and reinforcement learning. Scientific Reports.
- Rebecca L. Jackson et al. (2021). Reverse-engineering the cortical architecture for controlled semantic cognition. Nature Human Behaviour.
- James Alexander Taylor et al. (2021). Single cell plasticity and population coding stability in auditory thalamus upon associative learning. Nature Communications.
- Victor Ardulov et al. (2021). Robust diagnostic classification via Q-learning. Scientific Reports.
- Hongsheng Chen et al. (2021). Contact ability based topology control for predictable delay-tolerant networks. Scientific Reports.
- Cecilia Lindig-León et al. (2021). Nash equilibria in human sensorimotor interactions explained by Q-learning with intrinsic costs. Scientific Reports.
- K. Rajakumari et al. (2020). Improvising packet delivery and reducing delay ratio in mobile ad hoc network using neighbor coverage-based topology control algorithm. International Journal of Communication Systems.
- Anqing Jiang et al. (2020). Multilayer optical thin film design with deep Q learning. Scientific Reports.
- Jean-Claude Besse et al. (2020). Realizing a deterministic source of multipartite-entangled photonic qubits. Nature Communications.
- Aya Ogasawara et al. (2020). Reward sensitivity differs depending on global self-esteem in value-based decision-making. Scientific Reports.
- Hugo Bergeron et al. (2019). Femtosecond time synchronization of optical clocks off of a flying quadcopter. Nature Communications.
- Iman Sajedian et al. (2019). Double-deep Q-learning to increase the efficiency of metasurface holograms. Scientific Reports.
- Joanne C. Van Slooten et al. (2019). Spontaneous eye blink rate predicts individual differences in exploration and exploitation during reinforcement learning. Scientific Reports.
- Rachel H. Parkinson et al. (2017). A sublethal dose of a neonicotinoid insecticide disrupts visual processing and collision avoidance behaviour in Locusta migratoria. Scientific Reports.
- Peiyan Yuan et al. (2017). Percolation-theoretic bounds on the cache size of nodes in mobile opportunistic networks. Scientific Reports.
- Volodymyr Mnih et al. (2015). Human-level control through deep reinforcement learning. Nature.
- Richard W. Morris et al. (2014). Action-value comparisons in the dorsolateral prefrontal cortex control choice between goal-directed actions. Nature Communications.