Vivek Oommen

Orcid: 0000-0003-4363-6896

According to our database, Vivek Oommen authored at least 15 papers between 2022 and 2026.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2026
Mitigating spectral bias in neural operators via high-frequency scaling for physical systems.
Neural Networks, 2026

2025
Importance of localized dilatation and distensibility in identifying determinants of thoracic aortic aneurysm with neural operators.
CoRR, September, 2025

A Variational Framework for Residual-Based Adaptivity in Neural PDE Solvers and Operator Learning.
CoRR, September, 2025

Learning Turbulent Flows with Generative Models: Super-resolution, Forecasting, and Sparse Flow Reconstruction.
CoRR, September, 2025

Equilibrium Conserving Neural Operators for Super-Resolution Learning.
CoRR, April, 2025

XAI4Extremes: An interpretable machine learning framework for understanding extreme-weather precursors under climate change.
CoRR, March, 2025

2024
Deep neural operators as accurate surrogates for shape optimization.
Eng. Appl. Artif. Intell., 2024

From PINNs to PIKANs: Recent Advances in Physics-Informed Machine Learning.
CoRR, 2024

Integrating Neural Operators with Diffusion Models Improves Spectral Representation in Turbulence Modeling.
CoRR, 2024

RiemannONets: Interpretable Neural Operators for Riemann Problems.
CoRR, 2024

2023
Rethinking materials simulations: Blending direct numerical simulations with neural operators.
CoRR, 2023

GPT vs Human for Scientific Reviews: A Dual Source Review on Applications of ChatGPT in Science.
CoRR, 2023

Deep neural operators can serve as accurate surrogates for shape optimization: A case study for airfoils.
CoRR, 2023

2022
Solving Inverse Heat Transfer Problems Without Surrogate Models: A Fast, Data-Sparse, Physics Informed Neural Network Approach.
J. Comput. Inf. Sci. Eng., 2022

Learning two-phase microstructure evolution using neural operators and autoencoder architectures.
CoRR, 2022

