#1
Quote:
__________________
Net 54-- the discussion board where people resent discussions. My avatar is a sketch by my son who is an art school graduate. Some of his sketches and paintings are at https://www.jamesspaethartwork.com/
#2
Quote:
There could be a whole sea of issues. I can think of three questions AI/ML could help with: 1) detect whether a card is real or fake, 2) classify the card (type, year, etc.), and 3) classify the grade. In all of these cases, I can assure you people will want to know why the algorithm gave that grade/class/etc., i.e. an explanation of how the algorithm got its result. That requires explainable AI, which is beyond what today's algorithms can do. Furthermore, all of this requires a large training set (you need a lot of examples), including fake examples! Who has that many training examples sitting around?
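To make the "learning from labeled examples" point concrete, here is a toy sketch (not a real grading system; the feature names and numbers are invented for illustration): a 1-nearest-neighbor "fake vs. real" detector whose entire "model" is just the labeled examples it has been shown.

```python
# Toy illustration: a 1-nearest-neighbor fake/real detector.
# Each card is reduced to a hand-made feature vector; the "model"
# is nothing more than the labeled examples it has seen.
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(card, training_set):
    """Label an unseen card by its nearest labeled training example."""
    nearest = min(training_set, key=lambda ex: distance(card, ex[0]))
    return nearest[1]

# Hypothetical features: (ink density, border sharpness, paper brightness)
training_set = [
    ((0.90, 0.85, 0.40), "real"),
    ((0.88, 0.80, 0.45), "real"),
    ((0.30, 0.95, 0.90), "fake"),  # e.g. a reprint on bright white paper
    ((0.35, 0.90, 0.85), "fake"),
]

print(classify((0.87, 0.82, 0.42), training_set))  # lands near the real examples
```

Note that nothing here explains *why* a card was called fake; the answer is only "it resembled these other cards," which is exactly the explainability gap described above.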
#3
Quote:
Last edited by Peter_Spaeth; 08-08-2021 at 02:12 PM.
#4
No problem, it's fun seeing all the applications AI/ML can have. Snowman is right that it's a very long discussion, but I will try to give you a 30,000 ft view. You could build tools for 1-3, but they would be very limited. There are technological limitations as well as practical limitations.

The easiest to understand are the practical limitations. If you can't explain the results, the tools are useless. How crazy would the industry be if you received the following letter: "Dear Sir/Madam, our software has determined that your card has a 51% chance of being fake. Therefore, we are unable to certify it. Thank you for using our services."

The reason we can't explain the results is a technical limitation. Current AI/ML is a "black box" approach: you take an algorithm and train it on examples. Say I was creating an AI/ML tool for 1), detecting whether a card is real or fake. You show the tool a bunch of labeled examples of fake and real cards. It creates its own internal method for deciding whether a card is fake or real. You then test it on a bunch of cards it has never seen before and compare its results to human graders. If it does a good job, you're good to go!

So where do the issues come from? The algorithm may never have seen a certain color, a certain name, a certain type of error, or a weird fleck of dust: characteristics of cards that never existed in the training set (have you seen those cards that had a piece of fabric on them?). You might say, well, if it encounters something it's never seen before, it should tell someone to inspect the card! That is an even more complicated problem (anomaly detection). Plus, it can't tell anyone what it didn't understand about the card that broke it (explainable AI again). You might even say, well, let's just show it everything that has ever been graded before. That can cause something called overfitting: the algorithm becomes so fine-tuned to its training set that it throws out anything not in it. It gets more complicated the more you think about it. And this is essentially one of many problems for arguably the easiest of the 3 tasks. There is no easy checklist to go through for grading a card.

Just like a human grader, the tool needs to see a bunch of cards first. But the bottom line: as long as you need to explain how you got your results, AI/ML won't work.
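The anomaly-detection idea mentioned above can be sketched in a few lines (again purely illustrative; the features and the threshold are made up): if a submitted card is far from everything in the training set, don't trust the model's label, route the card to a human instead.

```python
# Toy sketch of anomaly detection in a grading pipeline (illustrative only).
# If a card is too far from every training example, flag it for a human
# rather than trusting the classifier's answer.
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def grade_or_flag(card, training_set, threshold=0.2):
    """Return a label if the card resembles known examples, else flag it."""
    nearest = min(training_set, key=lambda ex: distance(card, ex[0]))
    if distance(card, nearest[0]) > threshold:
        return "FLAG: send to human grader"  # nothing in training looks like this
    return nearest[1]

# Hypothetical (ink density, border sharpness, paper brightness) features
training_set = [((0.90, 0.85, 0.40), "real"), ((0.30, 0.95, 0.90), "fake")]

print(grade_or_flag((0.89, 0.84, 0.41), training_set))  # close to a known card
print(grade_or_flag((0.10, 0.10, 0.10), training_set))  # unlike anything seen
```

Even this sketch shows the explainability gap: the flag says only "this card is unlike my examples," not *which* characteristic of the card it failed to understand.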
#5
Thank you for explaining. That makes a lot of sense to me. Sounds like it's not ready for prime time.
#6
At the end of the day it’s just humans looking at cards. Wait, I’ve heard that before somewhere….
#7
Quote:
Assuming so, the obvious benefits would be that once the AI has seen enough examples of a particular card issue to discern fakes and alterations and accurately determine condition, you now have an experienced grader that won't ever quit, can work 24/7, has no bias, and can work faster than any human, most likely without needing coffee or bathroom breaks. And it won't matter what side of the bed it got up on that morning.

The disadvantages: until the AI has viewed enough examples of all possible conditions of a specific card issue, it is still in learning mode and can make mistakes. You will still need human input, at least initially, to tell the AI what it is looking at and what it means when it encounters something it hasn't come across before. You'll need to give the AI enough different examples to view, it will likely take a lot more time and human involvement to get properly established than one would think, and it may not always be cost effective for some of the more obscure issues. Especially with vintage cards, there may not be enough examples of a specific issue out there for the AI to ever reach an acceptable level of recognition; there will always need to be human knowledge and input in such cases. And for card issues that do exist in significant numbers, you'll still have to wait until the AI has gone through enough of them to render consistently accurate results, and I would assume this includes every brand new card issue that comes out. Until the AI has seen enough examples of a newly issued modern card, wouldn't those initial reviews all be subject to possible errors and inaccuracies, and therefore require more human interaction and review?

To me, a TPG planning to use such an AI would want to get the bugs out of the system before actually launching it, or at least as many as humanly possible. The TPG likely doesn't have a sufficient number of examples of every card issue out there, not to mention the new issues coming out every day, to do such testing all at once. So does that mean they use the submissions they receive from customers to provide the examples needed to bring the AI up to speed? If so, how long would that take before they have enough examples for every issue they grade? And do they run this AI training alongside their current human grading until they think the AI is ready, or do they switch totally to AI grading and have it closely watched and monitored by human graders until they think it's okay to function alone? Either way, it will be interesting.
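The "AI grades, humans monitor" rollout described above is often built as a confidence gate (a minimal sketch; the threshold, labels, and function are all invented for illustration): the AI's grade stands only when it is confident, and everything else goes to a human grader, whose answer also becomes a new training example.

```python
# Illustrative human-in-the-loop routing (all data and thresholds invented).
# Confident predictions pass through; uncertain ones are escalated to a
# human grader and logged as future training examples.
def route_submission(confidence, label, review_queue, training_log):
    """Accept the AI's grade only when confident; otherwise queue the
    card for human review and record it for the next training round."""
    if confidence >= 0.95:
        return label                     # AI grade stands
    review_queue.append(label)           # a human double-checks this one
    training_log.append(label)           # and it feeds the next model
    return "pending human review"

queue, log = [], []
print(route_submission(0.99, "grade 8", queue, log))   # confident: auto-grade
print(route_submission(0.60, "grade 5?", queue, log))  # uncertain: escalate
```

The open question in the post remains: the threshold and the size of the review queue decide how long the human graders stay in the loop, and for obscure vintage issues the queue may never empty.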