  #8  
Old 08-08-2021, 02:47 PM
Rick-Rarecards
member
 
Join Date: Aug 2021
Posts: 9
Default

No problem, it's fun seeing all the applications AI/ML can have. Snowman is right, it's a very long discussion, but I will try to give you a 30,000 ft view. You could create tools for 1-3, but they would be very limited. There are technological limitations as well as practical limitations.

The easiest to understand are the practical limitations. So yes, if you can't explain the results, the tools are useless. How crazy would the industry be if you received the following letter: "Dear Sir/Madam, our software has determined that your card has a 51% chance of being fake. Therefore, we are unable to certify it. Thank you for using our services."

The reason we can't explain the results is a technical limitation. Current AI/ML is a "black box" approach: you have an algorithm and you train it on examples. Let's say I was creating an AI/ML tool for 1), detecting whether a card is real or not. You basically show the tool a bunch of labeled examples, i.e. fake cards and real cards. It creates its own internal method to determine if a card is fake or real. You then test it on a bunch of cards that it has never seen before and compare its results to graders. If it does a good job, you are good to go!
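To make the "show it a bunch of labeled examples" part concrete, here is a rough sketch of what that training step could look like (PyTorch-style; the folder names, image size, and training settings are placeholders I made up, not anything a grading company actually uses):

[CODE]
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

# Hypothetical folder layout: card_images/train/fake/*.jpg and
# card_images/train/real/*.jpg, each holding labeled example scans.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("card_images/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a pretrained network and swap in a 2-class output head
# (class 0 = "fake", class 1 = "real"; ImageFolder sorts folder names alphabetically).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):                     # a few passes over the labeled examples
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# The trained weights are the model's "internal method" for telling real from
# fake: millions of numbers, not a rule a grader could read out loud or explain.
[/CODE]

That last comment is really the whole point: once it's trained, the "method" lives in the learned weights, which is exactly why nobody can explain the answers it gives.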

So where do the issues come from? Well, if the algorithm has never seen a certain color or a certain name before, has never seen a type of error, or there is a weird fleck of dust, etc. In other words, characteristics of cards that never existed in the training set (have you seen those cards that had a piece of fabric on them?). So you say, well, if it encounters something it's never seen before, it should tell someone to inspect the card! That is an even more complicated problem (anomaly detection). Plus, it can't tell anyone what it didn't understand about the card that broke it (explainable AI). You might even say, well, let's just show it everything that has ever been graded before. That can cause something called overfitting: your algorithm becomes so fine-tuned and specific to its training examples that it will throw out anything not in its training set.
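If you wanted the tool to raise its hand when it sees something unfamiliar, the crudest stand-in is a confidence threshold. A quick sketch (the 0.90 cutoff and the "send to a human grader" outcome are my own placeholders, and real anomaly detection is a much harder problem than this):

[CODE]
import torch
import torch.nn.functional as F

def classify_or_flag(model, image_tensor, threshold=0.90):
    """Classify one preprocessed card image, or flag it for a human grader.

    `model` is a trained 2-class network like the one sketched above;
    the 0.90 confidence threshold is an arbitrary placeholder.
    """
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(image_tensor.unsqueeze(0)), dim=1)[0]
    confidence, label = probs.max(dim=0)
    if confidence.item() < threshold:
        # The model can say "I'm not sure," but it cannot say *what* about
        # the card confused it. That is the explainable-AI gap.
        return "send to a human grader"
    return "fake" if label.item() == 0 else "real"
[/CODE]

Even then, all it can report is a low confidence number. It still can't point to the fleck of dust or the odd color that threw it off.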

It gets more complicated the more you think about it. And this is essentially one of many problems, just for arguably the easiest of the three tasks.

There is no easy checklist to go through for grading a card. Just like with a human grader, you need to have the tool see a bunch of cards. So you would say, "here is an image of a fake card, here is an image of a real card," and let it work out the rest. But as long as you need to explain how you got your results, AI/ML won't work.