Large-scale optimization of nonconvex MINLP refinery scheduling

Robert E. Franzoi, Jeffrey D. Kelly, Jorge A. W. Gut, Ignacio E. Grossmann, Brenno Castrillon Menezes

    Research output: Contribution to journal › Article › peer-review


    Abstract

    Modeling and optimizing large-scale refinery scheduling problems is challenging because of their complexity and size. Herein, we propose a mathematical model that represents such problems more accurately and realistically, together with a state-of-the-art optimization framework for its solution. The framework combines mathematical optimization and algorithmic methods, integrating modeling approaches (process design, model decompositions), solving strategies (rescheduling, heuristics), and machine learning regression (surrogate models). An industrial-size refinery scheduling problem (2 blenders, 4 feed tanks, a distillation network with 5 towers, and a processing network with an FCC unit, hydrotreaters, debutanizers, a superfractionator, and a catalytic reformer) is formulated as a hierarchical nonconvex mixed-integer nonlinear programming (MINLP) model and successfully optimized, yielding higher profitability and more efficient scheduling operations for 12 feedstocks, 10 products, and multiple scenarios of time horizon and time step. The results highlight the importance of tuning scheduling parameters and of employing an enhanced computer-aided framework to enable the solution of industrial refinery scheduling problems.
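    The abstract attributes the nonconvexity to the MINLP structure of blending and scheduling decisions. As a rough illustration only (not the authors' formulation), the minimal Pyomo sketch below shows where such nonconvexities typically arise in refinery scheduling: bilinear pooling/quality terms combined with binary logistics decisions. All tank names, qualities, and prices are hypothetical placeholders.

    ```python
    # Illustrative sketch only: a tiny nonconvex (bilinear) blending MINLP in Pyomo.
    # Not the model from the paper; data and names are invented for illustration.
    import pyomo.environ as pyo

    m = pyo.ConcreteModel()

    feeds = ["crudeA", "crudeB"]                      # hypothetical feed tanks
    m.flow = pyo.Var(feeds, bounds=(0, 100))          # flow from each tank
    m.blend_quality = pyo.Var(bounds=(0, 1))          # blended sulfur fraction
    m.run_blender = pyo.Var(domain=pyo.Binary)        # blender on/off (logistics decision)

    quality = {"crudeA": 0.02, "crudeB": 0.08}        # feed sulfur fractions (made up)
    cost = {"crudeA": 60.0, "crudeB": 45.0}           # feed costs (made up)

    # Bilinear quality balance: blend_quality * total flow = sum of component sulfur.
    # Products of two variables like this are what make the problem nonconvex.
    m.quality_balance = pyo.Constraint(
        expr=m.blend_quality * sum(m.flow[f] for f in feeds)
        == sum(quality[f] * m.flow[f] for f in feeds)
    )

    # Product spec and semi-continuous operation tied to the binary decision.
    m.spec = pyo.Constraint(expr=m.blend_quality <= 0.05)
    m.min_rate = pyo.Constraint(expr=sum(m.flow[f] for f in feeds) >= 20 * m.run_blender)
    m.max_rate = pyo.Constraint(expr=sum(m.flow[f] for f in feeds) <= 100 * m.run_blender)

    # Maximize a placeholder margin: product revenue minus feed cost.
    m.profit = pyo.Objective(
        expr=70.0 * sum(m.flow[f] for f in feeds)
        - sum(cost[f] * m.flow[f] for f in feeds),
        sense=pyo.maximize,
    )

    # Because of the bilinear constraint, a global MINLP solver (e.g. Couenne or
    # BARON, if installed) would be needed for guaranteed optimality:
    # results = pyo.SolverFactory("couenne").solve(m)
    ```

    The industrial problem described in the paper adds many such units, tanks, and time periods, which is why the authors combine decomposition, rescheduling heuristics, and surrogate models rather than solving the monolithic MINLP directly.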
    Original language: English
    Article number: 108678
    Number of pages: 17
    Journal: Computers and Chemical Engineering
    Volume: 186
    Early online date: Apr 2024
    DOIs
    Publication status: Published - Jul 2024

    Keywords

    • Decision-making framework
    • Heuristic algorithms
    • Mathematical programming
    • Nonconvex MINLP
    • Optimization
    • Refinery scheduling
