Introduction to the discussion on the probabilistic framework, focusing on concepts rather than detailed numbers.
The session will feature several presenters, including ERCOT engineers, and will build on topics discussed last year.
Focus is on ECRS and non-spin services under the probabilistic approach, not on regulation, RRS, or frequency-related services.
A recap of last year's AS study methodology and its developments to apply probabilistic modeling for AS quantities.
Discussion on the rationale for selecting specific services like ECRS and non-spin for the probabilistic approach.
Attendees recommended that inertia and RRS be considered in future discussions of grid reliability.
Highlights from the PUC's findings on AS methodology, emphasizing the need for dynamic determination of AS.
Development status of tools for the AS methodology evolution, targeting the transition to a dynamic approach in 2026.
Open questions surrounding the probabilistic framework's integration, such as handling available headroom, growth of renewable energy, temporal constraints, and understanding the framework's criteria.
The primary frequency response (PFR) benefits of reserves were raised as an important discussion point related to setting AS quantities.
Riaz Khan, a senior engineer at ERCOT, discussed the technique of Monte Carlo Sample Boosting to manage limited data samples.
The technique is applied to boost small sample sizes, such as transforming three years of data into an analysis similar to a ten-year dataset.
Monte Carlo resampling, a statistical technique, is used to estimate distribution statistics such as the mean and standard deviation by repeatedly drawing samples from the data with replacement.
The process draws these new samples and then uses the resulting mean and standard deviation to normalize the data, producing smoother empirical distributions.
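A minimal sketch of the bootstrap-style resampling idea described above, assuming NumPy; the function name, the number of resamples, and the synthetic input data are illustrative assumptions, not ERCOT's implementation.

```python
import numpy as np

def bootstrap_boost(samples, n_boot=1_000, seed=0):
    """Draw repeated samples with replacement and summarize the resulting
    distribution of means and standard deviations (bootstrap resampling)."""
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples, dtype=float)
    # Each row is one resampled copy of the original data set.
    resamples = rng.choice(samples, size=(n_boot, samples.size), replace=True)
    boot_means = resamples.mean(axis=1)
    boot_stds = resamples.std(axis=1, ddof=1)
    return boot_means.mean(), boot_stds.mean(), resamples

# Example: roughly three years of daily forecast errors for one hour (synthetic data).
errors = np.random.default_rng(1).normal(loc=0.0, scale=500.0, size=3 * 365)
mean_est, std_est, boosted = bootstrap_boost(errors)
print(f"bootstrapped mean ~ {mean_est:.1f} MW, std ~ {std_est:.1f} MW")
```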
The boosting technique has been tested on both normal and beta distributions to check its efficacy in mitigating data "choppiness".
Concerns were raised that the technique could dampen extreme data points, especially those associated with significant events such as outages or extreme weather.
Questions about how the technique maintains the probabilities of extreme events were clarified: resampling may reduce volatility, but it does not remove the ability to capture those events.
It was clarified that the new samples are generated around regression lines and matched to the empirical distributions, with a focus on preserving the original distributions' properties.
The technique will initially be applied to inputs such as wind, solar, and net load forecast errors, with the aim of making the most of limited data while avoiding potential biases.
Participants expressed concern that, while smoothing can yield a more consistent dataset, it may also dilute the real-world variability and extremes in the data.
Discussion on the adjacent hours concept and clustering techniques to increase sample size for each hour.
More samples lead to more accurate statistics and a better representation of system conditions for a specific hour.
Limited samples are used more fully by incorporating adjacent hours into the analysis.
Analysis shows a significant correlation in net load forecast errors between adjacent hours.
A weight-based approach includes adjacent hours to enlarge the original sample set used for the Monte Carlo distribution.
Adjacent hours are prioritized based on proximity to the target hour; for example, if the target is 12 PM, 11 AM and 1 PM are weighted more heavily than hours farther away.
Methodology helps produce smoother transitions in requirements, avoiding drastic changes in demand between hours.
The approach is guided by data indicating that a high forecast error at a specific hour often correlates with high errors in adjacent hours, not necessarily at the same level but still significant.
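A minimal sketch of how adjacent hours could be pooled into a target hour's sample set with distance-based weights, assuming NumPy; the specific weights (target hour counted most heavily, ±1 and ±2 hours progressively less) are illustrative assumptions, not the study's exact scheme.

```python
import numpy as np

def weighted_adjacent_sample(errors_by_hour, target_hour, max_offset=2):
    """Enlarge the target hour's sample by pooling adjacent hours,
    repeating nearer hours more often than farther ones."""
    pooled = []
    for offset in range(-max_offset, max_offset + 1):
        hour = (target_hour + offset) % 24
        # Hypothetical weights: target hour 3x, +/-1 hour 2x, +/-2 hours 1x.
        weight = max_offset + 1 - abs(offset)
        pooled.extend([errors_by_hour[hour]] * weight)
    return np.concatenate(pooled)

# Example: net load forecast errors keyed by hour of day (synthetic data).
rng = np.random.default_rng(7)
errors_by_hour = {h: rng.normal(0.0, 400.0, size=90) for h in range(24)}
sample_12 = weighted_adjacent_sample(errors_by_hour, target_hour=12)
print(len(errors_by_hour[12]), "->", len(sample_12), "samples available for hour 12")
```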
Clustering methods are being used to group the 288 month-hour combinations (12 months × 24 hours) to streamline analysis of the energy data.
The purpose of clustering is to identify similarities between data points based on selected features like net load ramp, solar load level, and forecast error.
The K-means method is used to form the clusters and characterize each one.
Clustering is algorithm-driven, with methods such as the elbow method used to select the number of clusters instead of manual grouping.
Different features and a variety of clustering results help assess operational importance and potential improvements in grid reliability.
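A minimal sketch of the K-means plus elbow-method step, assuming scikit-learn; the feature matrix is synthetic, and the feature names and chosen number of clusters are illustrative assumptions rather than the study's actual inputs.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Synthetic feature matrix: one row per month-hour combination (12 months x 24 hours = 288),
# with columns standing in for features such as net load ramp, solar level, and forecast error spread.
rng = np.random.default_rng(42)
features = rng.normal(size=(288, 3))
scaled = StandardScaler().fit_transform(features)

# Elbow method: fit K-means over a range of k and look for where inertia stops dropping sharply.
for k in range(2, 11):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(scaled)
    print(f"k={k:2d}  within-cluster sum of squares={km.inertia_:.1f}")

# After choosing k from the elbow, the labels group the month-hour combinations.
chosen_k = 5  # illustrative choice, not the study's value
labels = KMeans(n_clusters=chosen_k, n_init=10, random_state=0).fit_predict(scaled)
```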
A suggestion was made to document the lessons and iterations in the clustering process to improve understanding and transparency.
Discussion around clustering risks, especially in transition months like May, where conditions vary significantly within the month.
Consideration of dynamic approaches that adjust reserves based on market days closer to the operating day.
Potential value in breaking months into types of days (e.g., hot vs. cold) to improve clustering accuracy.
Highlighting the necessity of understanding clustering biases and continuously refining the methodology.
Emphasis on sharing the thought process and failed ideas to reduce redundancy in future considerations.
Appreciation was expressed for the effort to change methodologies to enhance grid reliability, acknowledging the complexity of the task.
Introduction of a new section focusing on risk reducers.
Discussion about available capacities, both online and offline, and how these can act as risk reducers.
Introduction to formulae assessing the output and ramping capability over different timeframes.
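A minimal sketch of the kind of formula described, assuming a unit's contribution over a time window is limited by both its remaining headroom and its ramp rate; the variable names, the min() form, and the simplified treatment of offline units are assumptions, not the exact ERCOT formulation.

```python
def available_capacity_mw(current_output, hsl, ramp_rate_mw_per_min, minutes, online=True):
    """Capacity a unit could add within `minutes`, limited by headroom and ramping.

    Offline units are treated as starting from zero output; start-up time is
    omitted here for simplicity (an assumption of this sketch).
    """
    if not online:
        current_output = 0.0
    headroom = max(hsl - current_output, 0.0)
    ramp_capability = ramp_rate_mw_per_min * minutes
    return min(headroom, ramp_capability)

# Example: a 500 MW unit at 350 MW with a 5 MW/min ramp rate.
print(available_capacity_mw(350, 500, 5, 10))  # 10-minute window: ramp-limited, 50 MW
print(available_capacity_mw(350, 500, 5, 30))  # 30-minute window: headroom-limited, 150 MW
```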
Feedback and considerations requested on the process currently in place.
Query by Andrew Reimers from IMM regarding sources of outage data and availability information, with discussion around telemetry versus outage scheduler.
Inquiry by Shams Siddiqi regarding the treatment of historical online headroom and ancillary services.
Discourse on the cyclic effect between observed headroom and procured reserves, addressing how reserves and real-time commitments impact reliability.
Explanation that the analysis is not a Loss of Load Probability (LOLP) study, focusing instead on managing commitment risk due to uncertainty.
Clarification on criteria determination for reserves, assessing reserve sufficiency against generated risk.
Sensitivity tests and adjustments for reliability-focused reserve setting methodologies discussed.
Introduction of probability curves relating reserve procurement levels to the likelihood of covering stress hours and meeting the criteria.
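A minimal sketch of how such a probability curve could relate candidate reserve levels to the chance of covering the sampled net load forecast errors, assuming NumPy and the boosted error samples from earlier steps; the 95% figure and the synthetic error distribution are illustrative assumptions, not the study's criterion.

```python
import numpy as np

def coverage_probability(error_samples, reserve_mw):
    """Fraction of sampled forecast errors covered by a given reserve level."""
    return float(np.mean(np.asarray(error_samples, dtype=float) <= reserve_mw))

def reserve_for_criterion(error_samples, criterion=0.95):
    """Smallest reserve level meeting the coverage criterion (a percentile of the samples)."""
    return float(np.quantile(error_samples, criterion))

# Example with synthetic boosted forecast-error samples (MW).
rng = np.random.default_rng(3)
samples = rng.normal(1000.0, 600.0, size=10_000)
for reserve in (1000, 1500, 2000, 2500):
    print(f"{reserve} MW covers {coverage_probability(samples, reserve):.1%} of sampled errors")
print("reserve meeting a 95% criterion:", round(reserve_for_criterion(samples), 1), "MW")
```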
Open feedback sought on further considerations, such as battery energy management influences.