Quote:
Originally Posted by Rick-Rarecards
No problem, it's fun seeing all the applications AI/ML can have. Snowman is right, it's a very long discussion, but I will try to give you a 30,000 ft view. You could create tools 1-3, but they would be very limited. There are technological limitations as well as practical ones.
The easiest to understand are the practical limitations. So yes, if you can't explain the results, the tools are useless. How crazy would the industry be if you received the following letter: "Dear Sir/Madam, our software has determined that your card has a 51% chance of being fake. Therefore, we are unable to certify it. Thank you for using our services."
The reason we can't explain the results is a technical limitation: current AI/ML is a "black box" approach. You have an algorithm and you train it on examples. Let's say I was creating an AI/ML tool for 1), detecting whether a card is real or not. You basically show the tool a bunch of labeled examples of fake and real cards. It creates its own internal method for deciding whether a card is fake or real. You then test it on a bunch of cards it has never seen before and compare its results to the graders'. If it does a good job, you are good to go!
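To make that train-and-test loop concrete, here is a minimal sketch in Python using scikit-learn. Everything in it is a stand-in: random arrays take the place of real card scans, and a production system would likely use a deep neural network on images rather than a random forest on pixel rows.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Stand-ins for real data: one row of pixel features per card scan,
# with a human-assigned label (0 = real, 1 = fake).
rng = np.random.default_rng(0)
X = rng.random((1000, 64 * 64))
y = rng.integers(0, 2, size=1000)

# Hold out cards the model has never seen, exactly as described.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# The model builds its own internal rules from the labeled examples.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Compare its calls on the unseen cards against the graders' labels.
print(classification_report(y_test, model.predict(X_test)))
```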
So where do the issues come from? Well, what if the algorithm has never seen a certain color or a certain name before, has never seen a type of error, or there is a weird fleck of dust, etc.? These are characteristics of cards that never existed in the training set (have you seen those cards that had a piece of fabric on them?). So you say, well, if it encounters something it's never seen before, it should tell someone to inspect the card! That turns out to be an even more complicated problem (anomaly detection). Plus, it can't tell anyone what it didn't understand about the card that tripped it up (explainable AI). You might even say, well, let's just show it everything that has ever been graded before. That can cause something called overfitting: your algorithm becomes so finely tuned to its training set that it will throw out anything not in it.
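The anomaly-detection piece can also be sketched. Using the same stand-in pixel features, an IsolationForest (one common off-the-shelf outlier detector, my choice here, not anything a TPG is known to use) learns the shape of the training data and flags cards that fall outside it. Note the limitation described above: it can say "this card is unusual," but not which characteristic made it unusual.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Same stand-in for the training scans as in the previous sketch.
rng = np.random.default_rng(0)
X_train = rng.random((1000, 64 * 64))

# Learn what "normal" training data looks like.
detector = IsolationForest(random_state=0).fit(X_train)

# Score a new card: -1 means "unlike anything in training, escalate
# to a human"; 1 means "looks like the data we trained on".
new_card = rng.random((1, 64 * 64))
print(detector.predict(new_card)[0])
```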
It gets more complicated the more you think about it. And this is essentially just one of many issues with arguably the easiest of the three tasks.
There is no easy checklist to go through for grading a card. Just like with a human grader, you need to have the tool see a bunch of cards. So you would say, "here is an image of a fake card," "here is an image of a real card," and so on. Bottom line: as long as you need to explain how you got your results, AI/ML won't work.
So in more layman's terms, you can think of the AI like a new human grader who starts off not knowing much, but as they see more and more examples of a specific card issue, they get better and better at determining the exact condition of a card from that issue and noting anomalies that might indicate it has been altered. Does that sound about right?
Assuming so, the obvious benefit would be that once the AI has seen enough examples of a particular card issue to discern fakes and alterations and accurately determine condition, you have an experienced grader that won't ever quit, can work 24/7, has no bias, works faster than any human, needs no coffee or bathroom breaks, and doesn't care what side of the bed it got up on that morning.
The disadvantages would include that until the AI has viewed enough examples of all possible conditions and problems for a specific card issue, it is still in learning mode and can make mistakes. You will still need human input, at least initially, to tell the AI what it is looking at and what it means when it encounters something it hasn't come across before. You'll need to give the AI enough different examples to view, it will likely take far more time and human involvement to get properly established than one would think, and it may not be cost-effective for some of the more obscure issues.
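That back-and-forth (the model defers to a person when it's unsure, and the person's answer becomes a new training example) is often called human-in-the-loop review. A rough sketch of the routing logic, with an arbitrary 0.90 confidence threshold and a dummy model standing in for a real grading network:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Dummy model trained on random data, just to make the sketch runnable.
rng = np.random.default_rng(0)
X, y = rng.random((500, 100)), rng.integers(0, 2, size=500)
model = RandomForestClassifier(random_state=0).fit(X, y)

review_queue = []  # cards a human must label; those labels feed the next retrain

def route_card(features, threshold=0.90):
    """Grade a card, or hand it to a person when the model is unsure."""
    probs = model.predict_proba(features.reshape(1, -1))[0]
    if probs.max() < threshold:
        review_queue.append(features)  # model is unsure: ask a human
        return "needs human review"
    return "fake" if probs.argmax() == 1 else "real"

print(route_card(rng.random(100)))
```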
Especially when talking about vintage cards, there may not be enough examples of a specific issue out there for the AI to ever reach an acceptable level of recognition; there will always need to be human knowledge and input in such cases. And for card issues that do exist in significant numbers, you'll still have to wait until the AI has gone through enough of them to render consistently accurate results, and I would have to assume this includes every brand-new card issue that comes out. Until the AI has gone through enough examples of a newly issued modern card, wouldn't those initial reviews all be subject to possible errors and inaccuracies, and therefore require more human interaction and review?
To me, it would seem a TPG planning to use such an AI would want to get the bugs out of the system before actually launching it, or at least as many as humanly possible. The TPG likely doesn't have a sufficient number of examples of every card issue out there, not to mention the new issues coming out every day, to do such testing all at once. So does that mean they use the submissions they receive from customers to provide enough examples to bring the AI up to speed? And if so, how long would that take before they have enough examples of every issue they grade to trust the AI? Do they run this AI testing alongside their current human grading until they think the AI is ready, or do they switch entirely to AI grading and have it closely watched and monitored by human graders until they think it is okay to function alone? Either way, it will be interesting.
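For what it's worth, the "run it alongside human grading" option is usually called shadow mode, and the bookkeeping is simple: the model grades every submission in the background while a human grades it as usual, and you track how often the two agree. A toy sketch, with an arbitrary 98% agreement bar that is purely my assumption:

```python
def shadow_mode_report(ai_grades, human_grades, cutoff=0.98):
    """Track how often the AI's grade matches the human's on the same card."""
    matches = sum(a == h for a, h in zip(ai_grades, human_grades))
    agreement = matches / len(human_grades)
    status = "ready to assist" if agreement >= cutoff else "keep training"
    return agreement, status

# Hypothetical grades for five submissions, AI vs. the human grader.
agreement, status = shadow_mode_report(
    ai_grades=[9, 8, 10, 7, 9],
    human_grades=[9, 8, 9, 7, 9],
)
print(f"{agreement:.0%} agreement: {status}")
```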