admins shit themselves maybe
Flag: Finland
Registered: March 7, 2021
Last post: September 8, 2022 at 2:28 PM
Posts: 2141
zeek having a tenz moment and picking yoru... let's go
arch is in the milkshake aim club? nice
they are there already
It's so much better this way. You want everyone to be selected in the last event, because it is temporally closest to Masters 3. You don't want to get some decent teams that accumulate points throughout two challengers and then get to Masters just based on that.
It adds complexity to a system that is already fairly complex. Two simple double elim qualifiers into a final qualifier with double elim is just the cleanest way, especially in a game where the mechanics and fundamentals are constantly changing and teams need to adapt throughout challengers. You want the teams that have adapted by the challengers finals to qualify if they win.
Regardless of system or metric, the comparison of prediction accuracy should be limited to those who predict on the same games. When you start comparing people who made different predictions, they are working not only with different amounts of information, but some predictions are also just easier to make based on the teams participating and their skill difference.
On the weighting idea, what you're proposing is the opposite of what you'd want to weight. The deeper you are into a tournament, the easier the games generally become to predict, because you have more data to base any given prediction on. Predicting is inherently an information game. To compare different predictors, you need to make sure that the predictions were made while the predictors were working with the same amount of information.
Brier scoring already "weights" predictions by how confident you are in them, so you don't even have to worry about that aspect. The only things that matter are the underlying uncertainty given the information you have and whether the predictions were made on the same events.
Let's say there's a really close matchup that's about 40/100 for team 1. Your goal as a predictor is not only to get the correct prediction, but also to predict how much uncertainty there is given the information (yes, you're rewarded for predicting uncertainty correctly too, but predicting uncertainty consistently gives you a lower resolution score). With more available information, the uncertainty naturally decreases.
A Brier score of 0 means you've made perfectly confident correct predictions, and a Brier score of 1 means you've made perfectly confident incorrect predictions. A Brier score of 0.5 would be achieved by choosing the correct prediction 50% of the time with 100% confidence. So if you predict that team 1 has a 40% probability of winning and they win, the Brier score of the prediction is (0.4-1)^2 = 0.36, but if they don't win, the Brier score for the prediction is (0.4-0)^2 = 0.16. So you are not penalized too hard for getting uncertain predictions wrong or right. Generally a score under 0.25 is good, because consistently guessing at maximum uncertainty would net you exactly that score (a prediction with 50% confidence produces a score of 0.25 regardless of the outcome).
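To make the arithmetic concrete, here's a minimal sketch (in Python, with made-up probabilities and outcomes) of the per-prediction Brier score and its mean over several matches:

```python
def brier(prob_team1_wins: float, team1_won: bool) -> float:
    """Squared error between the stated probability and the actual outcome (1 or 0)."""
    outcome = 1.0 if team1_won else 0.0
    return (prob_team1_wins - outcome) ** 2

# Hypothetical predictions: (probability given for team 1, did team 1 win?)
predictions = [(0.4, True), (0.4, False), (0.9, True), (0.5, False)]

scores = [brier(p, won) for p, won in predictions]
print(scores)                     # [0.36, 0.16, 0.01, 0.25]
print(sum(scores) / len(scores))  # mean Brier ≈ 0.195, below the 0.25 you'd get from pure 50/50 guessing
```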
tl;dr: Predictors have to be compared in the same situations no matter the system, because fundamentally the art of prediction is about the amount of information any given predictor has to make inferences from. In some situations there is more information available, and therefore the predictions are easier to make.
───────────────███───────────────
─────────────██▀─▀██─────────────
───────────██▀─────▀██───────────
─────────██▀──▄▄▄▄▄──▀██─────────
───────██▀──▄▀─────▀▄──▀██───────
─────██▀──▄▀───███───▀▄──▀██─────
───██▀────▀▄───▀▀▀───▄▀────▀██───
─██▀────────▀▄▄▄▄▄▄▄▀────────▀██─
█▀─────────────────────────────▀█
█████████████████████████████████
User-polled predictions on individual matches are exactly what I've wanted for a while now. For a prediction leaderboard I would suggest using a mean squared error (Brier): it's very simple to calculate and provides a nice metric for an individual's or an aggregate's predictions over time.
The problem with a prediction leaderboard, of course, is that not every prediction is created equal. Only those who predict the same games can be compared with such metrics. So it would have to be per tournament and only include those who predict on the same games (or just all games in that tournament, for simplicity).
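As a rough sketch of what such a per-tournament leaderboard could look like (the users, match IDs and numbers are all made up for illustration), ranking only the users who predicted every listed game by their mean Brier score:

```python
# Hypothetical per-tournament leaderboard: only users who predicted ALL listed matches are ranked.
results = {"m1": 1, "m2": 0, "m3": 1}  # match id -> 1 if the favorite won, else 0

user_predictions = {
    "alice": {"m1": 0.7, "m2": 0.6, "m3": 0.8},
    "bob":   {"m1": 0.55, "m2": 0.4, "m3": 0.9},
    "carol": {"m1": 0.7, "m3": 0.8},  # skipped m2, so not comparable
}

def mean_brier(preds: dict) -> float:
    """Mean squared error of a user's probabilities against the actual outcomes."""
    return sum((preds[m] - results[m]) ** 2 for m in results) / len(results)

leaderboard = sorted(
    (mean_brier(p), user)
    for user, p in user_predictions.items()
    if results.keys() <= p.keys()  # only users who predicted every game in the tournament
)
for score, user in leaderboard:
    print(f"{user}: {score:.3f}")  # lower is better
```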
Cool idea, but probably too big of a pain to implement, I imagine.
What does "backed up by statistics and not 1head takes" mean? Did you utilize some sort of quantitative measure to order this list?
The "org effect" really is just the fact that they are good enough to be picked up. The reason they get picked up is because they probably were performing well in scrims and the org made a cost benefit analysis and decided to pick them up. Since they get picked up, that decision indicates to the outside world, that they have potential.
u a real one
I haven't seen V1 flairs on this site for a while. Where did they go?
Nope, it doesn't seem to have equal point distribution. It's fairly ambiguous as to how it all works. I could adjust for them, but I can't really be arsed to do so.
Yeah, in general people will tend to pick the teams that, based on history and other factors, are more likely to end up at the top of the placement. So when we observe the result distribution of the bracket, it's going to be skewed toward the top more than if people were picking by random chance.
I don't know if that's the case with these VLR pick'ems though, because they use a scoring system where a varying amount of points is given for different matches. I wish they gave just the wins so we could compare the distribution to a random one.
I think getting them all wrong is mathematically equivalent to getting them all right (correct me if I'm wrong). So assuming you were going to get them all perfectly right, you could just swap the teams you were going to select and you've got your perfectly wrong pick'em. There seems to be no conditional reason why that would not be the case.
All time: Koan - The island of Deceased Ships
This year: voljum - even roses have thorns
Today: phonon - de flore sonos
C'mon Fnatic you got this. One more BO3 and you're in.
This game is just fire. The first super tough opponent this new Acend roster faces. My family jewels are on Acend, but if FPX are on form they can dominate.
I don't have anything to rebut, thankfully. It's just very entertaining to me to see how much nonsense you'll be able to generate if I keep querying you.
Maybe you should do a little reading on the efficient market hypothesis? If you think that is bunk too, you can surely price in your predictions in the markets and outperform them.
When you get kicked by the bookmaker, let me know!
The Brier score of the individual outcome is 0.264833622, so it's still tracking quite well. My dead grandma could've predicted Fnatic to win with 100% confidence and gotten a Brier score of 0, but that doesn't mean she's a skilled predictor; she could've just gotten a coinflip right :). So we compare multiple outcomes and the confidence to get the mean and figure out the relative strength of the predictors.
If you want, you can give me your probabilities (of the favorite winning) for the next upcoming matches so I can compute your Brier scores too and compare them to the prediction market's performance.
Well, Liquid had odds of 1.8 and Fnatic had odds of 1.9 pre-match, so actually it was a very strong prediction considering how close the actual game was!
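For anyone wondering where a number like the 0.2648 above comes from, here's a rough sketch of the usual way to turn pre-match decimal odds into normalized implied probabilities and then score the outcome (the conversion and rounding here are my assumptions; slightly different odds snapshots give slightly different scores):

```python
# Decimal odds -> implied probabilities (normalized to strip the bookmaker's margin),
# then the Brier score of the actual outcome.
odds = {"Liquid": 1.8, "Fnatic": 1.9}

raw = {team: 1 / o for team, o in odds.items()}         # ~0.556 and ~0.526
total = sum(raw.values())
implied = {team: p / total for team, p in raw.items()}   # ~0.514 and ~0.486

winner = "Fnatic"
score = (implied[winner] - 1) ** 2                       # ~0.264 with these odds
print(implied, score)
```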
Ouch.
Point to me where I said the server crashing is the observers' fault.
I never said that was the case. The observing quality has just been HORRIBLE, easily the worst I've seen. On top of that there are weird UI bugs and random times where the feed cuts out; it's just super bad overall.
This series is a great example of how poor observing/production can totally ruin the enjoyment of watching the game.
Finally Doma gets to play Phoenix in an official. Let's go.
Let's get this shit to 1k
Yeah, I'd rather have a strong Skye (Magnum) and a weak Breach (Derke) than a decent Breach (Magnum) and a weak Skye (Derke) right now. I just think they are playing it for the long term; I guess Derke needs to learn Skye for other maps too(?), if they run Magnum on Breach and get him up to speed.
I don't see a theoretical problem with the comp; the only problem is that some of these guys are playing agents they don't usually play, and they are not playing them as well as the players who usually play those agents on their team (like Magnum's Skye > Derke's Skye).
Your probability assessment is wrong.
There are also many maps of him playing Reyna (44). The original post was saying his Reyna is better than his Jett. He never brought up Breeze.
If you look at the aggregate of his stats on both of those agents, his Reyna is clearly better in terms of metrics. So is he wrong?
So you're only basing his performance on that series? Isn't that the thing you're supposed to avoid :D
That's what I said: he has only played Jett for the past 6 maps. Your implication was that people are basing their judgement of his Jett on 1 bad game when there are 6 games available. So how do you know people are just basing it on this map? Are his other games so much more impressive?
he played only Jett for 6 maps in the past 30 days tho
Mans playing the nerfed jett vs cypher... It's just painful.
at this pace the match may never be played
Yes, I know that. They recently integrated cloud into WP, creating a new team, NP (maybe "iteration" is a better word). This backbone has not historically performed well, but the new iteration seems to be doing well.
You have to realize the standard proposed isn't being at the top of the CIS scene; it's being on top of the entire scene. So a scoreline of 13-11, 13-11 against a newly integrated team which used to perform poorly (albeit performs well now) does not instill confidence in me. Therefore I would prefer to wait until they face competition at the EMEA level to even begin to consider such a proposition.
If Riot keeps making huge patch changes in the future like they have in recent times, I find it hard to believe that any team, except the teams that survive those transitions on top (like SEN), will be able to dominate in the long term. We'll see once they meet teams closer to their caliber in the EMEA playoffs on this new patch, potentially with the new agent.
As for now, they look good, but they were a little shaky vs No Pressure.
for humanity yes, but not for riot
Delete league of legends
Edit: I read smart things instead of stupid things KEKW
I think he means the others. They are not well known, but they were still pros in C->B tier.
no, on Liquid all of them are; Fnatic have 2 former CS pros in Derke and Boaster
U mean 5 cs players>2 cs players?
top 10 hottest pro players
top 10 hottest agents
top 50 voice lines
etc..
The TSM of Europe. But hey at least G2 get the dubs.
Spanish-speaking audience finally found a team to rally behind