How IsItRenewed Works
Every show on IsItRenewed carries a renewal verdict produced by an AI model. This page explains, honestly and in plain language, what that verdict means, what evidence goes into it, and where its limits lie.
What a verdict is
For each show we publish two things: a renewal probability from 0 to 100 percent, and a short verdict label that puts that number into words. Together they answer one question — how likely is this show to come back for another season?
It is important to be clear about what this is. A verdict is a prediction, not an official announcement. When a network has not yet said anything, our model is making an informed estimate from the evidence that exists publicly. It is not relaying a decision that has already been made behind closed doors. The one exception is when the outcome is genuinely settled — which is the first signal we look at.
The signals we weigh
The model starts with the facts that override everything else. If a show has a confirmed, future-dated next season, it is marked renewed at 100 percent — a network does not put a season on the schedule that it has not ordered. In the same way, shows that are officially listed as ended or cancelled are scored to match that reality. There is no guessing when the answer is already known.
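The override step described above can be sketched in a few lines of code. This is purely illustrative; the field names and statuses are hypothetical, not the site's actual data model:

```python
# Hypothetical sketch of the hard-override step: settled outcomes
# bypass the probabilistic model entirely. Field names are illustrative.

def settled_verdict(show):
    """Return a (probability, label) pair if the outcome is already known,
    or None if the model still has to estimate it."""
    if show.get("next_season_confirmed"):
        return (1.0, "Renewed")   # a scheduled season is an ordered season
    if show.get("status") == "cancelled":
        return (0.0, "Cancelled")
    if show.get("status") == "ended":
        return (0.0, "Ended")
    return None  # fate still open: fall through to the evidence model

print(settled_verdict({"next_season_confirmed": True}))  # (1.0, 'Renewed')
```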
When a show's fate is still open, the model weighs a range of evidence to estimate the probability:
- Official production status — whether the show is currently in production, between seasons, on hiatus, or wrapped.
- Renewal and cancellation news — genuine renewal-status reporting, such as a show being officially renewed, cancelled or announced as ending, is weighed heavily. Routine hype — trailers, casting announcements and promotional coverage — is treated as low-weight background noise, because it says little about whether another season is coming.
- Episode-rating trajectory — how audience ratings move across and between seasons. Ratings that fall over time are a negative signal for renewal.
- Audience and critic ratings — scores from IMDb, Rotten Tomatoes and Metacritic, which capture how a show has been received overall.
- Search-interest trend — whether public interest in the show is rising, holding steady or fading.
- Time since the last season — a long gap with no news is a different situation from a recent finale, and the model accounts for that.
- Network patterns — typical renewal behaviour for the network or streamer that carries the show, since different platforms make these calls differently.
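To make the weighing concrete, here is a minimal sketch of how signals like these could be combined into a single probability. The signal names, weights and scoring scheme are invented for illustration; the real model is more nuanced than a fixed weighted sum:

```python
# Illustrative only: combine several signals, each scored in [-1, 1],
# into a renewal probability. Weights and signals are hypothetical.

WEIGHTS = {
    "renewal_news": 0.35,      # genuine renewal/cancellation reporting dominates
    "production_status": 0.20,
    "rating_trajectory": 0.15,
    "audience_scores": 0.10,
    "search_trend": 0.10,
    "recency": 0.05,           # time since the last season
    "network_pattern": 0.05,
}

def renewal_probability(signals):
    """Map weighted signal scores (-1 = strongly negative, +1 = strongly
    positive) onto a 0-1 probability, with 0.5 meaning 'on the bubble'."""
    score = sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)
    return 0.5 + 0.5 * score  # score in [-1, 1] -> probability in [0, 1]

# A show with positive renewal news but fading search interest:
print(renewal_probability({"renewal_news": 0.8, "search_trend": -0.5}))
```

Note how a missing signal contributes nothing either way, which is one reason sparse data leads to a probability near the middle of the scale.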
The verdict scale and confidence
The probability is summarised with one of six verdict labels, running from a confirmed return to a confirmed conclusion:
- Renewed — another season is confirmed.
- Likely renewed — the evidence points strongly toward a return.
- On the bubble — the outcome is genuinely uncertain and could go either way.
- Likely cancelled — the evidence points toward the show not continuing.
- Cancelled — the show has been cancelled.
- Ended — the show concluded as planned.
Alongside the verdict we publish a confidence level — low, medium or high. Confidence reflects how much agreeing evidence the model had to work with. When many credible sources line up and the data tells a consistent story, confidence is high. When the data is sparse or the signals conflict, confidence is low, and the verdict should be read as a tentative estimate rather than a firm call.
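Putting the scale and the confidence level together, the final labelling step might look something like the sketch below. The thresholds and source counts are invented for illustration and are not the site's actual cut-offs:

```python
# Illustrative thresholds for turning a probability into a verdict label,
# and evidence counts into a confidence level. All numbers are hypothetical.

def verdict_label(probability, status=None):
    """Map a renewal probability (0-1) onto the six-label scale.
    Settled statuses take precedence over the probability."""
    if status == "ended":
        return "Ended"
    if status == "cancelled":
        return "Cancelled"
    if probability >= 0.99:
        return "Renewed"          # confirmed return
    if probability >= 0.65:
        return "Likely renewed"
    if probability > 0.35:
        return "On the bubble"
    return "Likely cancelled"

def confidence_level(agreeing_sources, conflicting_sources):
    """More agreeing, less conflicting evidence -> higher confidence."""
    if agreeing_sources >= 4 and conflicting_sources == 0:
        return "high"
    if agreeing_sources >= 2:
        return "medium"
    return "low"

print(verdict_label(0.72), confidence_level(3, 1))  # Likely renewed medium
```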
Always up to date
A renewal verdict is not set once and left alone. Each show is re-evaluated continuously as news breaks and the underlying data changes, so the answer you see reflects the current picture rather than a snapshot from months ago. Every show page shows when its verdict was last updated, along with a history of how the probability has moved over time — so you can see not just where a show stands, but which way it has been trending.
Limitations and honesty
We think a credible predictor has to be upfront about what it cannot do. Predictions can be wrong, and sometimes are — a network can make a call that the public evidence did not point to. Shows with very little data, such as those with few ratings or no news coverage, will get low-confidence verdicts, because there simply is not enough to go on. And the model has no inside information: it does not speak for any network, studio or streamer, and it has no access to decisions that have not been made public.
The honest test of any predictor is its record. We log every prediction against the real outcome once it is known, and you can review the results in our public accuracy log.