Journal Description
Entropy is an international and interdisciplinary peer-reviewed open access journal of entropy and information studies, published monthly online by MDPI. The International Society for the Study of Information (IS4SI) and the Spanish Society of Biomedical Engineering (SEIB) are affiliated with Entropy, and their members receive a discount on the article processing charge.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, SCIE (Web of Science), Inspec, PubMed, PMC, Astrophysics Data System, and other databases.
- Journal Rank: JCR - Q2 (Physics, Multidisciplinary) / CiteScore - Q1 (Mathematical Physics)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 20.8 days after submission; acceptance to publication takes 2.9 days (median values for papers published in this journal in the second half of 2023).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Testimonials: See what our editors and authors say about Entropy.
- Companion journals for Entropy include: Foundations, Thermo and MAKE.
Impact Factor: 2.7 (2022); 5-Year Impact Factor: 2.6 (2022)
Latest Articles
A Convolutional Neural Network-Based Quantization Method for Block Compressed Sensing of Images
Entropy 2024, 26(6), 468; https://doi.org/10.3390/e26060468 - 29 May 2024
Abstract
Block compressed sensing (BCS) is a promising method for resource-constrained image/video coding applications. However, the quantization of BCS measurements has posed a challenge, leading to significant quantization errors and encoding redundancy. In this paper, we propose a quantization method for BCS measurements using convolutional neural networks (CNN). The quantization process maps measurements to quantized data that follow a uniform distribution based on the measurements’ distribution, which aims to maximize the amount of information carried by the quantized data. The dequantization process restores the quantized data to data that conform to the measurements’ distribution. The restored data are then modified by the correlation information of the measurements drawn from the quantized data, with the goal of minimizing the quantization errors. The proposed method uses CNNs to construct quantization and dequantization processes, and the networks are trained jointly. The distribution parameters of each block are used as side information, which is quantized with 1 bit by the same method. Extensive experiments on four public datasets showed that, compared with uniform quantization and entropy coding, the proposed method can improve the PSNR by an average of 0.48 dB without using entropy coding when the compression bit rate is 0.1 bpp.
Full article
(This article belongs to the Special Issue Information Theory and Coding for Image/Video Processing)
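The quantization step described above maps measurements to data that follow a uniform distribution derived from the measurements' own distribution. A classical, non-learned analogue of that idea is companding through the cumulative distribution function followed by uniform quantization; the sketch below assumes a Gaussian measurement model purely for illustration and is not the authors' CNN-based method.

```python
import numpy as np
from scipy.stats import norm

def quantize_cdf(measurements, n_bits=4, mu=None, sigma=None):
    """Map measurements to ~uniform [0,1) via a Gaussian CDF, then quantize uniformly.
    The Gaussian model for the BCS measurements is an illustrative assumption."""
    mu = np.mean(measurements) if mu is None else mu
    sigma = np.std(measurements) if sigma is None else sigma
    u = norm.cdf(measurements, loc=mu, scale=sigma)        # ~Uniform(0,1) if the model fits
    levels = 2 ** n_bits
    q = np.clip(np.floor(u * levels), 0, levels - 1).astype(int)
    return q, (mu, sigma)

def dequantize_cdf(q, params, n_bits=4):
    """Invert the companding: map quantized indices back through the inverse CDF."""
    mu, sigma = params
    levels = 2 ** n_bits
    u_hat = (q + 0.5) / levels                              # mid-bin reconstruction
    return norm.ppf(u_hat, loc=mu, scale=sigma)

rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, size=1000)                         # stand-in for BCS measurements
q, params = quantize_cdf(y)
y_hat = dequantize_cdf(q, params)
print("mean squared quantization error:", np.mean((y - y_hat) ** 2))
```

The paper's approach learns this mapping and its inverse jointly with CNNs rather than fixing them analytically, and additionally corrects the restored data using correlation information drawn from the quantized measurements.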
Open Access Article
A TCN-Linear Hybrid Model for Chaotic Time Series Forecasting
by
Mengjiao Wang and Fengtai Qin
Entropy 2024, 26(6), 467; https://doi.org/10.3390/e26060467 - 29 May 2024
Abstract
The applications of deep learning and artificial intelligence have permeated daily life, with time series prediction emerging as a focal area of research due to its significance in data analysis. The evolution of deep learning methods for time series prediction has progressed from the Convolutional Neural Network (CNN) and the Recurrent Neural Network (RNN) to the recently popularized Transformer network. However, each of these methods has encountered specific issues. Recent studies have questioned the effectiveness of the self-attention mechanism in Transformers for time series prediction, prompting a reevaluation of approaches to LTSF (Long Time Series Forecasting) problems. To circumvent the limitations present in current models, this paper introduces a novel hybrid network, Temporal Convolutional Network-Linear (TCN-Linear), which leverages the temporal prediction capabilities of the Temporal Convolutional Network (TCN) to enhance the capacity of LTSF-Linear. Time series from three classical chaotic systems (Lorenz, Mackey–Glass, and Rossler) and real-world stock data serve as experimental datasets. Numerical simulation results indicate that, compared to classical networks and novel hybrid models, our model achieves the lowest RMSE, MAE, and MSE with the fewest training parameters, and its R² value is the closest to 1.
Full article
(This article belongs to the Section Signal and Data Analysis)
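As a rough illustration of the TCN-plus-linear idea summarized above, the following PyTorch sketch stacks dilated causal convolutions and maps the input window to the forecast horizon with a single linear head. The layer sizes, window length, and horizon are placeholder values, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

class CausalConvBlock(nn.Module):
    """One dilated causal convolution block, the basic TCN building unit."""
    def __init__(self, channels, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation            # left-pad so the conv is causal
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.act = nn.ReLU()

    def forward(self, x):                                   # x: (batch, channels, time)
        out = self.conv(nn.functional.pad(x, (self.pad, 0)))
        return self.act(out) + x                            # residual connection

class TCNLinear(nn.Module):
    """Illustrative TCN + linear head: convolutions extract temporal features,
    then a single linear layer maps the input window to the forecast horizon."""
    def __init__(self, window=96, horizon=24, channels=1, levels=3):
        super().__init__()
        self.tcn = nn.Sequential(*[CausalConvBlock(channels, dilation=2 ** i)
                                   for i in range(levels)])
        self.head = nn.Linear(window, horizon)

    def forward(self, x):                                    # x: (batch, channels, window)
        return self.head(self.tcn(x))                        # (batch, channels, horizon)

model = TCNLinear()
x = torch.randn(8, 1, 96)                                    # e.g., windows of a chaotic series
print(model(x).shape)                                        # torch.Size([8, 1, 24])
```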
Open Access Article
MV–MR: Multi-Views and Multi-Representations for Self-Supervised Learning and Knowledge Distillation
by
Vitaliy Kinakh, Mariia Drozdova and Slava Voloshynovskiy
Entropy 2024, 26(6), 466; https://doi.org/10.3390/e26060466 - 29 May 2024
Abstract
We present a new method of self-supervised learning and knowledge distillation based on multi-views and multi-representations (MV–MR). MV–MR is based on the maximization of dependence between learnable embeddings from augmented and non-augmented views, jointly with the maximization of dependence between learnable embeddings from the augmented view and multiple non-learnable representations from the non-augmented view. We show that the proposed method can be used for efficient self-supervised classification and model-agnostic knowledge distillation. Unlike other self-supervised techniques, our approach does not use any contrastive learning, clustering, or stop gradients. MV–MR is a generic framework allowing the incorporation of constraints on the learnable embeddings via the usage of image multi-representations as regularizers. The proposed method is used for knowledge distillation. MV–MR provides state-of-the-art self-supervised performance on the STL10 and CIFAR20 datasets in a linear evaluation setup. We show that a low-complexity ResNet50 model pretrained using proposed knowledge distillation based on the CLIP ViT model achieves state-of-the-art performance on STL10 and CIFAR100 datasets.
Full article
(This article belongs to the Special Issue Information Theory for Interpretable Machine Learning)
Open Access Article
Analysis of Vibration Characteristics of Bridge Structures under Seismic Excitation
by
Ling’ai Li and Shengxiang Huang
Entropy 2024, 26(6), 465; https://doi.org/10.3390/e26060465 - 29 May 2024
Abstract
Bridges may undergo structural vibration responses when exposed to seismic waves. An analysis of structural vibration characteristics is essential for evaluating the safety and stability of a bridge. In this paper, a signal time-frequency feature extraction method (NTFT-ESVD) integrating standard time-frequency transformation, singular value decomposition, and information entropy is proposed to analyze the vibration characteristics of structures under seismic excitation. First, the experiment simulates the response signal of the structure when exposed to seismic waves. The results of the time-frequency analysis indicate a maximum relative error of only 1% in frequency detection, and the maximum relative errors in amplitude and time parameters are 5.9% and 6%, respectively. These simulation results demonstrate the reliability of the NTFT-ESVD method in extracting the time-frequency characteristics of the signal and its suitability for analyzing the seismic response of the structure. Then, a real seismic wave event of the Su-Tong Yangtze River Bridge during the Hengchun earthquake in Taiwan (2006) is analyzed. The results show that the seismic waves only have a short-term impact on the bridge, with the maximum amplitude of the vibration response no greater than 1 cm, and the maximum vibration frequency no greater than 0.2 Hz in the three-dimensional direction, indicating that the earthquake in Hengchun will not have any serious impact on the stability and security of the Su-Tong Yangtze River Bridge. Additionally, the reliability of determining the arrival time of seismic waves by extracting the time-frequency information from structural vibration response signals is validated by comparing it with results from seismic stations (SSE/WHN/QZN) at similar epicenter distances published by the USGS. The results of the case study show that the combination of dynamic GNSS monitoring technology and time-frequency analysis can be used to analyze the impact of seismic waves on the bridge, which is of great help to the manager in assessing structural seismic damage.
Full article
(This article belongs to the Section Multidisciplinary Applications)
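The NTFT-ESVD pipeline combines a time-frequency transform, singular value decomposition, and information entropy. The sketch below substitutes an ordinary STFT for the paper's normal time-frequency transform and shows how the singular values of the resulting time-frequency matrix yield an entropy measure; the synthetic 0.2 Hz, centimeter-scale signal only mimics the order of magnitude reported for the bridge response and is not the study's data.

```python
import numpy as np
from scipy.signal import stft

def singular_value_entropy(signal, fs=100.0, nperseg=256):
    """Shannon entropy of the normalized singular values of an STFT magnitude matrix.
    A plain STFT stands in here for the paper's normal time-frequency transform."""
    f, t, Z = stft(signal, fs=fs, nperseg=nperseg)
    s = np.linalg.svd(np.abs(Z), compute_uv=False)        # singular value spectrum
    p = s / s.sum()                                        # normalize to a distribution
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Synthetic stand-in for a structural vibration response: a low-frequency
# oscillation (assumed 0.2 Hz) plus measurement noise.
fs = 100.0
t = np.arange(0, 600, 1 / fs)
x = 0.01 * np.sin(2 * np.pi * 0.2 * t) \
    + 0.001 * np.random.default_rng(1).standard_normal(t.size)
print("singular-value entropy:", singular_value_entropy(x, fs=fs))
```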
Open Access Article
Viewpoint Selection for 3D-Games with f-Divergences
by
Micaela Y. Martin, Mateu Sbert and Miguel Chover
Entropy 2024, 26(6), 464; https://doi.org/10.3390/e26060464 - 29 May 2024
Abstract
In this paper, we present a novel approach for optimal camera selection in video games. The new approach explores the use of information-theoretic metrics, f-divergences, to measure the correlation between the objects as viewed in the camera frustum and the ideal or target view. The f-divergences considered are the Kullback–Leibler divergence or relative entropy, the total variation and the divergence. Shannon entropy is also used for comparison purposes. The visibility is measured using the differential form factors from the camera to objects and is computed by casting rays with importance sampling Monte Carlo. Our method allows a very fast dynamic selection of the best viewpoints, which can take into account changes in the scene, in the ideal or target view, and in the objectives of the game. Our prototype is implemented in the Unity engine, and our results show an efficient selection of the camera and an improved visual quality. The most discriminating results are obtained with the use of the Kullback–Leibler divergence.
Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
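The selection criterion sketched above compares, for each candidate camera, the distribution of object visibility with a target distribution using an f-divergence. A minimal illustration with hypothetical visibility fractions standing in for the Monte Carlo form-factor estimates:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between discrete distributions."""
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def total_variation(p, q):
    """Total variation distance between discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return 0.5 * float(np.abs(p / p.sum() - q / q.sum()).sum())

# Hypothetical data: per-object visibility fractions seen from three candidate
# cameras (in the paper these come from ray-cast form factors) and a target view.
target = np.array([0.4, 0.3, 0.2, 0.1])           # desired attention over 4 objects
views = {
    "cam_A": np.array([0.10, 0.10, 0.40, 0.40]),
    "cam_B": np.array([0.35, 0.30, 0.20, 0.15]),
    "cam_C": np.array([0.25, 0.25, 0.25, 0.25]),
}
scores = {name: kl_divergence(v, target) for name, v in views.items()}
best = min(scores, key=scores.get)                 # lowest divergence = closest to target
print(scores, "-> best view:", best)
```

Whether the divergence is taken from view to target or target to view is a design choice; the sketch simply picks the camera with the smallest value.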
Open Access Article
Entropy Production in Reaction–Diffusion Systems Confined in Narrow Channels
by
Guillermo Chacón-Acosta and Mayra Núñez-López
Entropy 2024, 26(6), 463; https://doi.org/10.3390/e26060463 - 29 May 2024
Abstract
This work analyzes the effect of wall geometry when a reaction–diffusion system is confined to a narrow channel. In particular, we study the entropy production density in the reversible Gray–Scott system. Using an effective diffusion equation that considers modifications by the channel characteristics, we find that the entropy density changes its value but not its qualitative behavior, which helps explore the structure-formation space.
Full article
(This article belongs to the Special Issue Random Walks and Stochastic Processes in Complex Systems: From Physics to Socio-Economic Phenomena)
Open Access Article
High-Throughput Polar Code Decoders with Information Bottleneck Quantization
by
Claus Kestel, Lucas Johannsen and Norbert Wehn
Entropy 2024, 26(6), 462; https://doi.org/10.3390/e26060462 - 28 May 2024
Abstract
In digital baseband processing, the forward error correction (FEC) unit is among the most demanding components in terms of computational complexity and power consumption. Hence, efficient implementation of FEC decoders is crucial for next-generation mobile broadband standards and an ongoing research topic. Quantization has a significant impact on the decoder area, power consumption and throughput. Thus, lower bit widths are preferred for efficient implementations but degrade the error correction capability. To address this issue, a non-uniform quantization based on the Information Bottleneck (IB) method is proposed that enables a low bit width while maintaining the essential information. Many investigations on the use of the IB method for Low-Density Parity-Check (LDPC) code decoders exist and have shown its advantages from an implementation perspective. However, for polar code decoder implementations, there exists only one publication that is not based on the state-of-the-art Fast Simplified Successive-Cancellation (Fast-SSC) decoding algorithm, and only synthesis implementation results without energy estimation are shown. In contrast, our paper presents several optimized Fast-SSC polar code decoder implementations using IB-based quantization with placement and routing results using advanced 12 nm FinFET technology. Gains of up to 16% in area and 13% in energy efficiency are achieved with IB-based quantization at a Frame Error Rate (FER) of and a polar code of compared to state-of-the-art decoders.
Full article
(This article belongs to the Special Issue Intelligent Information Processing and Coding for B5G Communications)
Open Access Article
Leveraging Data Locality in Quantum Convolutional Classifiers
by
Mingyoung Jeng, Alvir Nobel, Vinayak Jha, David Levy, Dylan Kneidel, Manu Chaudhary, Ishraq Islam, Audrey Facer, Manish Singh, Evan Baumgartner, Eade Vanderhoof, Abina Arshad and Esam El-Araby
Entropy 2024, 26(6), 461; https://doi.org/10.3390/e26060461 - 28 May 2024
Abstract
Quantum computing (QC) has opened the door to advancements in machine learning (ML) tasks that are currently implemented in the classical domain. Convolutional neural networks (CNNs) are classical ML architectures that exploit data locality and possess a simpler structure than fully connected multi-layer perceptrons (MLPs) without compromising the accuracy of classification. However, the concept of preserving data locality is usually overlooked in the existing quantum counterparts of CNNs, particularly for extracting multifeatures in multidimensional data. In this paper, we present a multidimensional quantum convolutional classifier (MQCC) that performs multidimensional and multifeature quantum convolution with average and Euclidean pooling, thus adapting the CNN structure to a variational quantum algorithm (VQA). The experimental work was conducted using multidimensional data to validate the correctness and demonstrate the scalability of the proposed method utilizing both noisy and noise-free quantum simulations. We evaluated the MQCC model with reference to reported work on state-of-the-art quantum simulators from IBM Quantum and Xanadu using a variety of standard ML datasets. The experimental results show the favorable characteristics of our proposed techniques compared with existing work with respect to a number of quantitative metrics, such as the number of training parameters, cross-entropy loss, classification accuracy, circuit depth, and quantum gate count.
Full article
(This article belongs to the Special Issue Quantum Computation, Communication and Cryptography)
Open Access Article
Testing the Conjecture That Quantum Processes Create Conscious Experience
by
Hartmut Neven, Adam Zalcman, Peter Read, Kenneth S. Kosik, Tjitse van der Molen, Dirk Bouwmeester, Eve Bodnia, Luca Turin and Christof Koch
Entropy 2024, 26(6), 460; https://doi.org/10.3390/e26060460 - 28 May 2024
Abstract
The question of what generates conscious experience has mesmerized thinkers since the dawn of humanity, yet its origins remain a mystery. The topic of consciousness has gained traction in recent years, thanks to the development of large language models that now arguably pass the Turing test, an operational test for intelligence. However, intelligence and consciousness are not related in obvious ways, as anyone who suffers from a bad toothache can attest—pain generates intense feelings and absorbs all our conscious awareness, yet nothing particularly intelligent is going on. In the hard sciences, this topic is frequently met with skepticism because, to date, no protocol to measure the content or intensity of conscious experiences in an observer-independent manner has been agreed upon. Here, we present a novel proposal: Conscious experience arises whenever a quantum mechanical superposition forms. Our proposal has several implications: First, it suggests that the structure of the superposition determines the qualia of the experience. Second, quantum entanglement naturally solves the binding problem, ensuring the unity of phenomenal experience. Finally, a moment of agency may coincide with the formation of a superposition state. We outline a research program to experimentally test our conjecture via a sequence of quantum biology experiments. Applying these ideas opens up the possibility of expanding human conscious experience through brain–quantum computer interfaces.
Full article
(This article belongs to the Section Quantum Information)
Open Access Article
Ising’s Roots and the Transfer-Matrix Eigenvalues
by
Reinhard Folk and Yurij Holovatch
Entropy 2024, 26(6), 459; https://doi.org/10.3390/e26060459 - 28 May 2024
Abstract
Today, the Ising model is an archetype describing collective ordering processes. As such, it is widely known in physics and far beyond. Less known is the fact that the thesis defended by Ernst Ising 100 years ago (in 1924) contained not only the solution of what we call now the ‘classical 1D Ising model’ but also other problems. Some of these problems, as well as the method of their solution, are the subject of this note. In particular, we discuss the combinatorial method Ernst Ising used to calculate the partition function for a chain of elementary magnets. In the thermodynamic limit, this method leads to the result that the partition function is given by the roots of a certain polynomial. We explicitly show that ‘Ising’s roots’ that arise within the combinatorial treatment are also recovered by the eigenvalues of the transfer matrix, a concept that was introduced much later. Moreover, we discuss the generalization of the two-state model to a three-state one presented in Ising’s thesis, which is not included in his famous paper of 1925 (E. Ising, Z. Physik 31 (1925) 253). The latter model can be considered as a forerunner of the now-abundant models with many-component order parameters.
Full article
(This article belongs to the Section Statistical Physics)
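The correspondence between Ising's combinatorial roots and the transfer-matrix eigenvalues can be checked numerically for the 1D model: with the standard two-state transfer matrix, the partition function of a periodic chain equals the sum of the eigenvalues raised to the N-th power. A short verification sketch using textbook conventions, not taken from the article:

```python
import numpy as np
from itertools import product

def transfer_matrix(beta, J, h):
    """Standard 2x2 transfer matrix of the 1D Ising model in a field h."""
    return np.array([[np.exp(beta * (J + h)), np.exp(-beta * J)],
                     [np.exp(-beta * J),      np.exp(beta * (J - h))]])

def Z_transfer(beta, J, h, N):
    """Partition function of an N-spin ring: sum of eigenvalues to the N-th power."""
    lam = np.linalg.eigvalsh(transfer_matrix(beta, J, h))
    return float(np.sum(lam ** N))

def Z_brute(beta, J, h, N):
    """Brute-force sum over all 2^N spin configurations of a periodic chain."""
    Z = 0.0
    for spins in product([-1, 1], repeat=N):
        E = -J * sum(spins[i] * spins[(i + 1) % N] for i in range(N))
        E -= h * sum(spins)
        Z += np.exp(-beta * E)
    return Z

beta, J, h, N = 0.7, 1.0, 0.3, 10
print(Z_transfer(beta, J, h, N), Z_brute(beta, J, h, N))   # the two values agree
```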
Open Access Article
Embedded Complexity of Evolutionary Sequences
by
Jonathan D. Phillips
Entropy 2024, 26(6), 458; https://doi.org/10.3390/e26060458 - 28 May 2024
Abstract
Multiple pathways and outcomes are common in evolutionary sequences for biological and other environmental systems due to nonlinear complexity, historical contingency, and disturbances. From any starting point, multiple evolutionary pathways are possible. From an endpoint or observed state, multiple possibilities exist for the sequence of events that created it. However, for any observed historical sequence—e.g., ecological or soil chronosequences, stratigraphic records, or lineages—only one historical sequence actually occurred. Here, a measure of the embedded complexity of historical sequences based on algebraic graph theory is introduced. Sequences are represented as system states S(t), such that S(t − 1) ≠ S(t) ≠ S(t + 1). Each sequence of N states contains nested subgraph sequences of length 2, 3, …, N − 1. The embedded complexity index (which can also be interpreted in terms of embedded information) compares the complexity (based on the spectral radius λ1) of the entire sequence to the cumulative complexity of the constituent subsequences. The spectral radius is closely linked to graph entropy, so the index also reflects information in the sequence. The analysis is also applied to ecological state-and-transition models (STM), which represent observed transitions, along with information on their causes or triggers. As historical sequences are lengthened (by the passage of time and additional transitions or by improved resolutions or new observations of historical changes), the overall complexity asymptotically approaches λ1 = 2, while the embedded complexity increases as N^2.6. Four case studies are presented, representing coastal benthic community shifts determined from biostratigraphy, ecological succession on glacial forelands, vegetation community changes in longleaf pine woodlands, and habitat changes in a delta.
Full article
(This article belongs to the Special Issue Entropy and Information in Biological Systems)
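One concrete reading of the index described above, offered here only as an assumption: if the N distinct states of a sequence are taken as vertices of an undirected path graph (each state linked to its successor), the largest adjacency eigenvalue tends to 2 as N grows, matching the asymptote quoted in the abstract, and the whole-sequence complexity can be set against the summed spectral radii of the nested subsequences. The paper's exact graph construction and normalization may differ.

```python
import numpy as np

def path_spectral_radius(n_states):
    """Largest adjacency eigenvalue of an undirected path graph on n_states vertices;
    equals 2*cos(pi/(n+1)), which approaches 2 as n grows."""
    A = np.zeros((n_states, n_states))
    for i in range(n_states - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    return float(np.max(np.linalg.eigvalsh(A)))

def embedded_complexity(n_states):
    """Illustrative comparison: complexity of the full sequence versus the summed
    complexities of nested subsequences of lengths 2..N-1 (one possible reading
    of the abstract; the paper's definition may differ)."""
    whole = path_spectral_radius(n_states)
    embedded = sum(path_spectral_radius(k) for k in range(2, n_states))
    return whole, embedded

for N in (4, 8, 16, 32):
    whole, embedded = embedded_complexity(N)
    print(N, round(whole, 4), round(embedded, 2))   # whole -> 2, embedded grows with N
```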
Open Access Article
Geometrothermodynamics of 3D Regular Black Holes
by
Nurzada Beissen
Entropy 2024, 26(6), 457; https://doi.org/10.3390/e26060457 - 28 May 2024
Abstract
We investigate a spherically symmetric exact solution of Einstein’s gravity with cosmological constant in (2 + 1) dimensions, non-minimally coupled to a scalar field. The solution describes the gravitational field of a black hole, which is free of curvature singularities in the entire spacetime. We use the formalism of geometrothermodynamics to investigate the geometric properties of the corresponding space of equilibrium states and find their interpretation from the point of view of thermodynamics. It turns out that, as a result of the presence of thermodynamic interaction, the space of equilibrium states is curved with two possible configurations, which depend on the value of a coupling constant. In the first case, the equilibrium space is completely regular, corresponding to a stable thermodynamic system. The second case is characterized by the presence of two curvature singularities, which are shown to correspond to locations where the system undergoes two different phase transitions, one due to the breakdown of the thermodynamic stability condition and the second one due to the presence of a divergence at the level of the response functions.
Full article
(This article belongs to the Special Issue Geometrothermodynamics and Its Applications)
Open Access Article
The Interplay between Tunneling and Parity Violation in Chiral Molecules
by
Daniel Martínez-Gil, Pedro Bargueño and Salvador Miret-Artés
Entropy 2024, 26(6), 456; https://doi.org/10.3390/e26060456 - 27 May 2024
Abstract
In this review, the concepts of quantum tunneling and parity violation are introduced in the context of chiral molecules. A particle moving in a double well potential provides a good model to study the behavior of chiral molecules, where the left well and right well represent the L and R enantiomers, respectively. If the model considers the quantum behavior of matter, the concept of quantum tunneling emerges, giving rise to stereomutation dynamics between left- and right-handed chiral molecules. Parity-violating interactions, like the electroweak one, can also be considered, making possible the existence of an energy difference between the L and R enantiomers, the so-called parity-violating energy difference (PVED). Here we provide a brief account of some theoretical methods usually employed to calculate this PVED, also commenting on relevant experiments aimed at detecting the PVED in chiral molecules. Finally, we comment on some ways of solving the so-called Hund’s paradox, with emphasis on mean-field theory and decoherence.
Full article
(This article belongs to the Special Issue Tunneling in Complex Systems)
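The double-well picture reviewed above is often reduced to a two-level Hamiltonian in the {|L>, |R>} basis, with an off-diagonal tunneling element and a diagonal parity-violating bias. The sketch below uses placeholder numbers to show the standard effect that a PVED large compared with the tunneling splitting suppresses stereomutation oscillations; it is a generic illustration, not the review's calculations.

```python
import numpy as np
from scipy.linalg import expm

def stereomutation_probability(delta, eps_pv, times):
    """P(L -> R) for a two-level chiral molecule in the {|L>, |R>} basis.
    H = [[ eps_pv, -delta], [-delta, -eps_pv]]  (hbar = 1; units arbitrary).
    delta: tunneling matrix element, eps_pv: half the parity-violating energy difference."""
    H = np.array([[eps_pv, -delta], [-delta, -eps_pv]])
    psi0 = np.array([1.0, 0.0], dtype=complex)             # start in the left-handed state
    probs = []
    for t in times:
        U = expm(-1j * H * t)                               # unitary time evolution
        probs.append(abs((U @ psi0)[1]) ** 2)               # population of |R>
    return np.array(probs)

times = np.linspace(0.0, 50.0, 200)
for eps in (0.0, 0.05, 1.0):                                # PVED small vs. large w.r.t. delta
    p = stereomutation_probability(delta=0.1, eps_pv=eps, times=times)
    print(f"eps_pv={eps}: max P(L->R) = {p.max():.3f}")     # oscillation amplitude shrinks
```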
Open Access Article
Linear Codes Constructed from Two Weakly Regular Plateaued Functions with Index (p − 1)/2
by
Shudi Yang, Tonghui Zhang and Zheng-an Yao
Entropy 2024, 26(6), 455; https://doi.org/10.3390/e26060455 - 27 May 2024
Abstract
Linear codes are the most important family of codes in cryptography and coding theory. Some codes only have a few weights and are widely used in many areas, such as authentication codes, secret sharing schemes and strongly regular graphs. By setting , we constructed an infinite family of linear codes using two distinct weakly regular unbalanced (and balanced) plateaued functions with index . Their weight distributions were completely determined by applying exponential sums and Walsh transform. As a result, most of our constructed codes have a few nonzero weights and are minimal.
Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
Open Access Article
Systems and Methods for Transformation and Degradation Analysis
by
Jude A. Osara and Michael D. Bryant
Entropy 2024, 26(6), 454; https://doi.org/10.3390/e26060454 - 27 May 2024
Abstract
Modern concepts in irreversible thermodynamics are applied to system transformation and degradation analyses. Phenomenological entropy generation (PEG) theorem is combined with the Degradation-Entropy Generation (DEG) theorem for instantaneous multi-disciplinary, multi-scale, multi-component system characterization. A transformation-PEG theorem and space materialize with system and process defining elements and dimensions. The near-100% accurate, consistent results and features in recent publications demonstrating and applying the new TPEG methods to frictional wear, grease aging, electrochemical power system cycling—including lithium-ion battery thermal runaway—metal fatigue loading and pump flow are collated herein, demonstrating the practicality of the new and universal PEG theorem and the predictive power of models that combine and utilize both theorems. The methodology is useful for design, analysis, prognostics, diagnostics, maintenance and optimization.
Full article
(This article belongs to the Special Issue Trends in the Second Law of Thermodynamics)
Open Access Article
Combining Exergy and Pinch Analysis for the Operating Mode Optimization of a Steam Turbine Cogeneration Plant in Wonji-Shoa, Ethiopia
by
Shumet Sendek Sharew, Alessandro Di Pretoro, Abubeker Yimam, Stéphane Negny and Ludovic Montastruc
Entropy 2024, 26(6), 453; https://doi.org/10.3390/e26060453 - 27 May 2024
Abstract
In this research, the simulation of an existing 31.5 MW steam power plant, providing both electricity for the national grid and hot utility for the related sugar factory, was performed by means of ProSimPlus® v. 3.7.6. The purpose of this study is to analyze the steam turbine operating parameters by means of the exergy concept with a pinch-based technique in order to assess the overall energy performance and losses that occur in the power plant. The combined pinch and exergy analysis (CPEA) initially focuses on the depiction of the hot and cold composite curves (HCCCs) of the steam cycle to evaluate the energy and exergy requirements. Based on the minimal approach temperature difference (∆Tlm) required for effective heat transfer, the exergy loss that raises the heat demand (heat duty) for power generation can be quantitatively assessed. The exergy composite curves focus on the potential for fuel saving throughout the cycle with respect to three possible operating modes and evaluates opportunities for heat pumping in the process. Well-established tools, such as balanced exergy composite curves, are used to visualize exergy losses in each process unit and utility heat exchangers. The outcome of the combined exergy–pinch analysis reveals that energy savings of up to 83.44 MW may be realized by lowering exergy destruction in the cogeneration plant according to the operating scenario.
Full article
(This article belongs to the Special Issue Thermodynamic Optimization of Industrial Energy Systems)
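The exergy bookkeeping behind such a combined pinch-exergy analysis rests on the Carnot factor. In generic symbols, not tied to the plant data of the study, the exergy rate carried by heat and the exergy destroyed when heat crosses a finite temperature difference are

$$\dot{E}x_Q = \dot{Q}\left(1 - \frac{T_0}{T}\right), \qquad \dot{E}x_{\mathrm{dest}} = \dot{Q}\, T_0 \left(\frac{1}{T_c} - \frac{1}{T_h}\right),$$

where T_0 is the ambient (dead-state) temperature, T is the temperature at which the heat is available, and T_h, T_c are the hot- and cold-side temperatures of a heat exchanger. Narrowing the approach temperature difference lowers the destruction term at the cost of heat-exchanger area.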
Open Access Article
Extended Regression Analysis for Debye–Einstein Models Describing Low Temperature Heat Capacity Data of Solids
by
Ernst Gamsjäger and Manfred Wiessner
Entropy 2024, 26(6), 452; https://doi.org/10.3390/e26060452 - 26 May 2024
Abstract
Heat capacity data of many crystalline solids can be described in a physically sound manner by Debye–Einstein integrals in the temperature range from to . The parameters of the Debye–Einstein approach are either obtained by a Markov chain Monte Carlo (MCMC) global optimization method or by a Levenberg–Marquardt (LM) local optimization routine. In the case of the MCMC approach the model parameters and the coefficients of a function describing the residuals of the measurement points are simultaneously optimized. Thereby, the Bayesian credible interval for the heat capacity function is obtained. Although both regression tools (LM and MCMC) are completely different approaches, not only the values of the Debye–Einstein parameters, but also their standard errors appear to be similar. The calculated model parameters and their associated standard errors are then used to derive the enthalpy, entropy and Gibbs energy as functions of temperature. By direct insertion of the MCMC parameters of all computer runs the distributions of the integral quantities enthalpy, entropy and Gibbs energy are determined.
Full article
(This article belongs to the Special Issue Computational Thermodynamics and Its Applications)
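The local optimization route mentioned above can be illustrated with scipy.optimize.curve_fit, which uses a Levenberg-Marquardt routine for unconstrained problems. The sketch fits a weighted sum of one Debye and one Einstein term to synthetic heat capacity data; the weights, temperatures, and characteristic temperatures are placeholders, and the paper's MCMC treatment of the residuals is not reproduced here.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import curve_fit

R = 8.314462618  # gas constant, J mol^-1 K^-1

def debye_cv(T, theta_D):
    """Debye molar heat capacity for one mole of oscillators."""
    def integrand(x):
        return x**4 * np.exp(x) / np.expm1(x) ** 2
    val, _ = quad(integrand, 0.0, theta_D / T)
    return 9.0 * R * (T / theta_D) ** 3 * val

def einstein_cv(T, theta_E):
    """Einstein molar heat capacity for one mole of oscillators."""
    x = theta_E / T
    return 3.0 * R * x**2 * np.exp(x) / np.expm1(x) ** 2

def model(T, a_D, theta_D, a_E, theta_E):
    """Weighted Debye + Einstein contribution (a_D, a_E in moles of oscillators)."""
    return np.array([a_D * debye_cv(t, theta_D) + a_E * einstein_cv(t, theta_E) for t in T])

# Synthetic "measurements" from known parameters plus noise, then an LM fit.
rng = np.random.default_rng(3)
T_data = np.linspace(5.0, 300.0, 60)
true = (1.0, 250.0, 2.0, 500.0)
cp_data = model(T_data, *true) + rng.normal(0.0, 0.1, T_data.size)
popt, pcov = curve_fit(model, T_data, cp_data, p0=(1.5, 200.0, 1.5, 400.0))
print("fitted parameters:", popt)
print("standard errors:", np.sqrt(np.diag(pcov)))
```

The fitted parameters and covariance can then be propagated into enthalpy, entropy, and Gibbs energy by integrating the heat capacity function, as the abstract describes.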
Open Access Article
An Adaptive Sampling Algorithm with Dynamic Iterative Probability Adjustment Incorporating Positional Information
by
Yanbing Liu, Liping Chen, Yu Chen and Jianwan Ding
Entropy 2024, 26(6), 451; https://doi.org/10.3390/e26060451 - 26 May 2024
Abstract
Physics-informed neural networks (PINNs) have garnered widespread use for solving a variety of complex partial differential equations (PDEs). Nevertheless, when addressing certain specific problem types, traditional sampling algorithms still reveal deficiencies in efficiency and precision. In response, this paper builds upon the progress of adaptive sampling techniques, addressing the inadequacy of existing algorithms to fully leverage the spatial location information of sample points, and introduces an innovative adaptive sampling method. This approach incorporates the Dual Inverse Distance Weighting (DIDW) algorithm, embedding the spatial characteristics of sampling points within the probability sampling process. Furthermore, it introduces reward factors derived from reinforcement learning principles to dynamically refine the probability sampling formula. This strategy more effectively captures the essential characteristics of PDEs with each iteration. We utilize sparsely connected networks and have adjusted the sampling process, which has proven to effectively reduce the training time. In numerical experiments on fluid mechanics problems, such as the two-dimensional Burgers’ equation with sharp solutions, pipe flow, flow around a circular cylinder, lid-driven cavity flow, and Kovasznay flow, our proposed adaptive sampling algorithm markedly enhances accuracy over conventional PINN methods, validating the algorithm’s efficacy.
Full article
(This article belongs to the Special Issue Physics-Informed Neural Networks)
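The general idea of residual-driven, distance-aware sampling can be sketched without the paper's specific DIDW formula or reinforcement-learning reward factors: candidate collocation points are drawn with a probability that grows with the local PDE residual and is damped near points already selected. The following toy example uses a synthetic residual field and placeholder exponents; it is a generic illustration, not the authors' algorithm.

```python
import numpy as np

def adaptive_sample(candidates, residuals, existing, n_new=10, alpha=2.0, beta=1.0, eps=1e-6):
    """Pick new collocation points with probability ~ residual^alpha * (distance penalty)^beta.
    A generic residual/distance-weighted scheme, not the paper's DIDW formula."""
    res = np.abs(residuals) ** alpha
    if len(existing) > 0:
        # distance from each candidate to its nearest already-chosen point
        d = np.min(np.linalg.norm(candidates[:, None, :] - existing[None, :, :], axis=-1), axis=1)
        weight = res * (d + eps) ** beta        # favor high residual, far from existing points
    else:
        weight = res
    p = weight / weight.sum()
    idx = np.random.default_rng(0).choice(len(candidates), size=n_new, replace=False, p=p)
    return candidates[idx]

# Toy setup: candidates on [0,1]^2 and a synthetic residual field peaked near (0.5, 0.5),
# standing in for PINN residuals of, e.g., a Burgers'-type sharp solution.
rng = np.random.default_rng(42)
cands = rng.uniform(0.0, 1.0, size=(2000, 2))
resid = np.exp(-50.0 * np.sum((cands - 0.5) ** 2, axis=1))
chosen = adaptive_sample(cands, resid, existing=rng.uniform(0, 1, size=(50, 2)))
print(chosen)
```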
Open Access Article
A Direct Entropic Approach to the Thermal Balance of Spontaneous Chemical Reactions
by
Michele D’Anna, Paolo Lubini, Hans U. Fuchs and Federico Corni
Entropy 2024, 26(6), 450; https://doi.org/10.3390/e26060450 - 26 May 2024
Abstract
When working with, and learning about, the thermal balance of a chemical reaction, we need to consider two overlapping but conceptually distinct aspects: one relates to the process of reallocating entropy between reactants and products (because of different specific entropies of the new substances compared to those of the old), and the other to dissipative processes. Together, they determine how much entropy is exchanged between the chemicals and their environment (i.e., in heating and cooling). By making explicit use of (a) the two conjugate pairs chemical amount (i.e., amount of substance) and chemical potential, and entropy and temperature, respectively, (b) the laws of balance of amount of substance on the one hand and entropy on the other, and (c) a generalized approach to the energy principle, it is possible to create both imaginative and formal conceptual tools for modeling thermal balances associated with chemical transformations in general and exothermic and endothermic reactions in particular. In this paper, we outline the concepts and relations needed for a direct approach to chemical and thermal dynamics, create a model of exothermic and endothermic reactions, including numerical examples, and discuss how to relate the direct entropic approach to traditional models of these phenomena.
Full article
(This article belongs to the Section Thermodynamics)
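The balance described above can be condensed into one bookkeeping relation. For a reaction at constant temperature and pressure, per unit extent of reaction,

$$S_{\mathrm{exch}} = \Delta_r S - S_{\mathrm{prod}}, \qquad S_{\mathrm{prod}} = -\frac{\Delta_r G}{T} \ge 0, \qquad \Delta_r G = \Delta_r H - T\,\Delta_r S \;\Rightarrow\; S_{\mathrm{exch}} = \frac{\Delta_r H}{T},$$

where S_exch is the entropy received by the reacting mixture from its surroundings (negative for an exothermic reaction), so the entropy handed to the environment combines the reallocation term (the difference in specific entropies of products and reactants) and the dissipative production term. As a hypothetical round-number example: with Δ_rH = −200 kJ/mol and Δ_rS = −100 J/(mol K) at 298 K, the chemicals deliver about 671 J/(mol K) to the environment, of which 100 J/(mol K) stems from reallocation and the remaining roughly 571 J/(mol K) = −Δ_rG/T from entropy production.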
Open Access Article
Interference of Particles with Fermionic Internal Degrees of Freedom
by
Jerzy Dajka
Entropy 2024, 26(6), 449; https://doi.org/10.3390/e26060449 - 26 May 2024
Abstract
The interference of fermionic particles, specifically molecules comprising a small number of fermions, in a Mach–Zehnder interferometer is being investigated under the influence of both classical and non-classical external controls. The aim is to identify control strategies that can elucidate the relationship between the interference pattern and the characteristics of internal fermion–fermion interactions.
Full article
(This article belongs to the Special Issue Matter-Aggregating Systems at a Classical vs. Quantum Interface)
Topics
Topic in Algorithms, Diagnostics, Entropy, Information, J. Imaging
Application of Machine Learning in Molecular Imaging
Topic Editors: Allegra Conti, Nicola Toschi, Marianna Inglese, Andrea Duggento, Matthew Grech-Sollars, Serena Monti, Giancarlo Sportelli, Pietro Carra
Deadline: 31 May 2024
Topic in Education Sciences, Entropy, JAL, Societies, Sustainability
Sustainability in Aging and Depopulation Societies
Topic Editors: Shiro Horiuchi, Gregor Wolbring, Takeshi Matsuda
Deadline: 15 June 2024
Topic in Buildings, Energies, Entropy, Resources, Sustainability
Advances in Solar Heating and Cooling
Topic Editors: Salvatore Vasta, Sotirios Karellas, Marina Bonomolo, Alessio Sapienza, Uli Jakob
Deadline: 30 June 2024
Topic in Actuators, Applied Sciences, Entropy
Thermodynamics and Heat Transfers in Vacuum Tube Trains (Hyperloop)
Topic Editors: Suyong Choi, Minki Cho, Jungyoul Lim
Deadline: 30 July 2024
Conferences
22–26 November 2024
2024 International Conference on Science and Engineering of Electronics (ICSEE'2024)
28–31 May 2024
XXII Conference on Non-equilibrium Statistical Mechanics and Nonlinear Physics—MEDYFINOL 2024
Special Issues
Special Issue in Entropy
Entropy, Statistical Evidence, and Scientific Inference: Evidence Functions in Theory and Applications
Guest Editors: Brian Dennis, Mark L. Taper, Jose Miguel Ponciano
Deadline: 31 May 2024
Special Issue in Entropy
Nonlinear Dynamics in Cardiovascular Signals
Guest Editor: Claudia Lerma
Deadline: 15 June 2024
Special Issue in Entropy
Non-equilibrium Thermodynamics
Guest Editors: Duc Nguyen-Manh, Abraham Marmur
Deadline: 30 June 2024
Special Issue in Entropy
Information Theory for MIMO Systems
Guest Editors: Lin Zhou, Lin Bai
Deadline: 15 July 2024
Topical Collections
Topical Collection in Entropy
Algorithmic Information Dynamics: A Computational Approach to Causality from Cells to Networks
Collection Editors: Hector Zenil, Felipe Abrahão
Topical Collection in Entropy
Wavelets, Fractals and Information Theory
Collection Editor: Carlo Cattani
Topical Collection in Entropy
Entropy in Image Analysis
Collection Editor: Amelia Carolina Sparavigna