PDA

View Full Version : Automated Card Grading - Surface


kevinlenane
09-11-2020, 03:44 AM
417867

For anyone following along with my efforts to build a machine-vision-based grading, authentication and provenance company - we are currently crossing what I think is probably the last milestone before we can demo a real proof of concept. Grading corners, edges and centering is pretty straightforward - it's the surface grades that wind up requiring the most training data and are the most challenging, for a variety of reasons - mostly relating to how to create a perfect "mask" against which to compare all other "submitted"/scanned cards.

We are pretty close to finishing a generic solution for this piece, and in the process we also checked another box: there is ample data to fingerprint a card image and mark and measure it for any alteration. When the alteration news really hit the forums, folks were pointing to a few obvious print marks and stains, but what is visible to pixel analysis is much bigger - once we figured out how to normalize what that perfect mask looked like, each and every card quite literally had a fingerprint. Anyway, this alone would be super useful to any grading effort - I've attached an image of an actual card and its departures from the "perfect version." I thought folks here might find this extremely reliable feature interesting as a guard against trimming and other alterations, including any color changes, which the machine sees as changes in pixel color.
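For readers curious what this pixel-level comparison might look like, here is a minimal sketch of the idea, not Kevin's actual pipeline (all names, the threshold, and the toy arrays are illustrative): subtract an aligned "perfect" reference from the scan and flag pixels that diverge beyond a noise threshold.

```python
import numpy as np

def deviation_map(card: np.ndarray, perfect_mask: np.ndarray,
                  threshold: float = 0.1) -> np.ndarray:
    """Boolean map of pixels where the scanned card departs from the
    'perfect' reference by more than `threshold` (0-1 intensity scale)."""
    diff = np.abs(card.astype(float) - perfect_mask.astype(float))
    return diff > threshold

def fingerprint(card: np.ndarray, perfect_mask: np.ndarray) -> float:
    """Toy 'fingerprint' summary: fraction of pixels that diverge."""
    return float(deviation_map(card, perfect_mask).mean())

# Toy example: a 4x4 'card' identical to the mask except one defect pixel.
mask = np.zeros((4, 4))
card = mask.copy()
card[1, 2] = 0.5  # simulated print mark / stain
print(fingerprint(card, mask))  # 1 of 16 pixels diverge -> 0.0625
```

A real system would of course align the images first and learn the threshold per set, but the departure map itself is this simple in principle.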

kevinlenane
09-11-2020, 03:51 AM
I should also note that the actual surface grade is basically learned behavior and is a somewhat proportional measure of how far the card diverges from the perfect version, and also the extent of each divergence. Even a 10 (or 100%) will have significant divergences/anomalies.

swarmee
09-11-2020, 04:36 AM
We touched on this a little in the CSG grading thread.

https://www.net54baseball.com/showpost.php?p=2016289&postcount=30

toledo_mudhen
09-11-2020, 04:38 AM
Damn Kevin - I had been thinking that I was a fairly innovative "IT" guy.

You're on a whole different level than the rest of us nerds - KUDOS bud!

chriskim
09-11-2020, 05:32 AM
Come on! Pls don't bring AI to our hobby. We all love our beat-up T cards; if AI grades them, they will all come back as 1s.

kevinlenane
09-11-2020, 07:18 AM
swarmee -

The machine vision is capable of detecting the set, year and card, and can then adjust its surface parameters and level of acceptance accordingly. This enables a more accurate and consistent score and also enables better fingerprinting, though that isn't strictly required for unique identification, given how unique each card already is.

Oddly enough, normalizing the perspective from the many possible camera angles proved to be almost as difficult as working out this surface adjustment.
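Perspective normalization of this kind is typically done with a planar homography estimated from the four detected card corners. Below is a minimal self-contained sketch of that standard technique (not the actual implementation; real systems usually use a library such as OpenCV, and all point coordinates here are made up):

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 homography mapping 4 source points to 4
    destination points via the direct linear transform (DLT)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null vector of this 8x9 system.
    _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Apply a homography to a 2D point (homogeneous divide)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Map a skewed quadrilateral (the detected card corners) onto an
# upright 500x700 rectangle, as a normalization step might do.
corners = [(12, 30), (480, 8), (500, 690), (5, 705)]
target = [(0, 0), (500, 0), (500, 700), (0, 700)]
H = homography(corners, target)
print(apply_h(H, (12, 30)))  # approximately (0, 0)
```

Once every scan is warped into the same upright rectangle, pixel-by-pixel comparison against a reference becomes meaningful regardless of the original camera angle.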

Gorditadogg
09-11-2020, 07:51 AM
Way cool. Looking forward to seeing you on Shark Tank.


tazdmb
09-11-2020, 07:55 AM
I want to be an Angel investor!

Leon
09-11-2020, 08:28 AM
Very interesting technology. I believe there are several companies ramping up this type of technology (not exactly the same) at the moment. If I were a current TPG I wouldn't rest on my laurels... there could be a new mousetrap soon.

kevinlenane
09-11-2020, 08:51 AM
417880

Here is the perspective normalization I was referencing, for those who are interested... basically this ensures that we have the right angle on the card to evaluate appropriately. Otherwise minor changes in camera angle would produce dramatically inaccurate grades...

Leon who else is working on this? Would love to compare notes :)

kevinlenane
09-11-2020, 08:52 AM
I also want to point out this is the first thread devoted entirely to Larry Bowa imagery.

ullmandds
09-11-2020, 08:57 AM
Very cool! I think a technology that "learned" more about each issue as more cards were graded would be incredible! Over time, anomalies on certain cards equating to alterations could be detected and corrected.

kevinlenane
09-11-2020, 09:04 AM
Yes, that's exactly how neural networks are intended to work - the more we see common anomalies, the better we are at recognizing them - both overall and set by set. Initially we'll have it working for a few sets, but we won't have to feed it training data for all of them - really only for representative and unique sets. So one or two tobacco sizes, a representative set for circle cards, etc. Then over time the system as a whole gets better, as does its accuracy within each set. Authenticity is actually more era-specific, whereas grading tends to be more generic, with any major differences having to do with how shiny modern cards are - and even this only impacts the surface grading dynamics, to account for, say, a smudge, etc...

luciobar1980
09-11-2020, 03:53 PM
Hmm, how would this work for a surface impression that isn't really visible from straight-on?

JollyElm
09-11-2020, 04:35 PM
I gotta say, assessing the surface area is a tricky business. Whenever I'm really closely looking at my cards to decide what to send in via Bobby's group subs, time and again I find weird little 'impressions' (for lack of a better word) here and there that confuse the issue. They will completely and utterly seem to be naturally occurring imperfections of the printing/glossing process...in other words, they've been there from the start...but it makes me think that these tiny (hard to describe) things might affect the grade. I'm not really explaining it well, but I'm sure many people here know exactly what I am trying to say.

My question is, would your technology be able to 'know' and detect the differences between, say, an indentation caused by a kid writing on a piece of paper on top of a card, and a naturally occurring gloss anomaly from the printing process? Tough question, I know. Thanks.

steve B
09-11-2020, 10:20 PM
I'd like more details on what's considered to be a perfect surface. Before about 1992 the cardstock had - not exactly texture, but - a surface that wasn't perfectly flat. I can't see downgrading something for the way the paper was originally made.

How would this handle something like '93 Upper Deck, where a portion of the set has three different ways the gloss on the back was applied:
Gloss only on the picture
Gloss on the picture with an added gloss layer over the entire back
Overall gloss.

Foil stamping can be impressed more or less deeply on any foiled card. How is that measured from a mostly straight-on photo? And what tolerances are used? Like, when is it too deep or too shallow, so that it's a point against?

kevinlenane
09-12-2020, 03:04 AM
All good questions - so the severity of surface defects basically comes down to differences in pixel color. The resulting "how much they matter" depends on the training data at the beginning, and on how the system is trained on an ongoing basis. Basically, if there is knowledgeable training data going in - and then synthetic training data created from there - then all these things get distinguished. Glossy surfaces are definitely the most nuanced, but in short: if a defect or other factor changes the pixel colors enough to impact a grade, then the logic of how those pixels got disrupted gets built in.

So if the disruption is meant to be there, then the training data should show it - if the training data is good. My guess is also that any surface defect or difference would be visible straight-on in a high-res photo, even if it's not visible to the naked eye. Even seemingly straight indents or stamps are never actually 100% straight. In general, surface - and particularly glossy surfaces - will be a bit more nuanced, but in machine vision the accuracy is really dependent on your training data and how you amplify it.
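A toy illustration of a grade that scales with both the extent and the magnitude of divergence from a reference, in the spirit described above (the weights, noise floor, and names are made up for illustration; the real values would come from training data):

```python
import numpy as np

def surface_score(card, reference, noise_floor=0.05):
    """Toy surface score: 10 minus a penalty combining how far pixels
    diverge from the reference (magnitude) and how many do (extent).
    `noise_floor` ignores stock texture / scanner noise; all weights
    here are illustrative, not real training-derived values."""
    diff = np.abs(card.astype(float) - reference.astype(float))
    diff[diff < noise_floor] = 0.0            # tolerate benign variation
    extent = (diff > 0).mean()                # fraction of pixels affected
    magnitude = diff.max()                    # worst single departure
    penalty = 10 * (0.5 * extent + 0.5 * magnitude)
    return max(0.0, round(10 - penalty, 1))

ref = np.zeros((10, 10))
clean = ref.copy()
creased = ref.copy()
creased[5, :] = 0.8   # a crease-like line across the card
print(surface_score(clean, ref), surface_score(creased, ref))  # 10.0 5.5
```

The point of the sketch is only the shape of the scoring: a pristine scan loses nothing, while a defect is penalized both for how many pixels it touches and for how strongly it departs.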

swarmee
09-12-2020, 08:36 AM
Since you're here discussing it, I have some other questions:
1) Are you looking to create your own grading company or licensing your process?
2) Is your service looking into autographed cards, in terms of whether the autograph damages the card in some way: surface impressions, mislaid sticker autos (bubbling), etc.
3) Once centering is determined, how are you deciding how much to debit the overall score based on it? As we know, Beckett has the most stringent view on centering based on their grading scale, with PSA being more lenient. SGC, which used to be more lenient, has gotten much stricter recently.
4) Are you assigning score/rubrics to subgrades, and are you focused on the standard 4 subgrades (corners, edges, surface, centering) or have you determined there should be additional breakdowns of the subgrades into more categories?
5) Are you going to allow licensees (if applicable) to change your weighting scales to match their current assessments or written scoring rubrics?
6) Are you using the system to determine counterfeit stamps like Desert Shield or serial numbering size/font/placement? 1991 Topps would actually be a really interesting set to train your system on, since there are so many different back variations, recurring print defects, errors/corrections, Desert Shield stamped cards, ultraviolet backs, etc.
7) Can your imagery pick up "hidden" text like the Nolan Ryan Pacific cards or the Upper Deck X-Men Professor Xavier variants?
8) How are the following issues decremented: smeared ink print defects, production nicks, tilt, scratches, factory miscut edges, factory rough cuts, missing/lighter ink application, etc?

I think you're doing some neat things and will actually discover a bunch of unknown recurring print defects developing this process.

I have some pack fresh cards that I've gotten over the years from Vintage Breaks that I would send as a donation if it would help your cause. Or maybe you could hook up with the 1980s "all PSA 10s" guy on the Collector's Universe webpage to acquire a bunch of his stock for scanning a bunch of cards in order to train your system better. https://forums.collectors.com/discussion/995191/i-love-the-1980s-the-ultimate-unopened-rip-quest-to-build-topps-fleer-donruss-psa-10-sets

steve B
09-12-2020, 10:10 PM
Sounds like an interesting approach.

To the human eye, the inking levels of overlapping halftone dots can change the perceived color. Slight registration problems can do the same thing.

Your description of how surface defects are identified is interesting in many ways.
I take it you can pick up the depth of a stamping from how the pixel colors change around the edge of the stamping, where the cardstock is curved - it will be curved over a larger distance if the stamping is deeper.

The gloss differences on 93 UD are tricky in anything but raking light.

I can think of a lot of other issues on modern cards that could be challenging at first: '81 star stickers with black printed under a lighter blue on the front; many '70s-'80s sets that have both light and dark ink on the back; '88 Score, where both fronts and backs are screened differently and have different die cuts, so the corners are different.

Plus a few where the computer might be a huge help:
'81 Topps low-contrast backs (I have a bunch set aside, but I'm on the fence about whether they're truly different, and whether the cause is different plate exposure, plate wear, or something else entirely).
Most Topps A&G and Gypsy Queen - I have many set aside as slightly tinted backs, compared to pure white backs. It's very subtle, and I have trouble picking them out if the lighting isn't right.

Hundreds of cards in each set from the '70s to '91 have recurring semi-random spots. I have a bunch of these in my '81 Topps list, and with only about 15-20,000 cards looked through, I'm sure I missed at least as many as I found. Cataloging them all so they aren't counted as surface defects would be amazing, but probably a pretty serious challenge.

I'd really love to know more about just how all this works. (But 1 - I probably wouldn't understand some of it, especially any math; 2 - I figure the exact details are probably being kept secret.) My wife is a software developer, and we've talked about what would be easy and what would be hard a few times. I've often been surprised that some of what I think would be hard is something she thinks is no problem.

hcv123
09-13-2020, 08:12 AM
I am sure it will not be perfect, but a HUGE step toward greater consistency. I would think subgrades and average overall grade would be most suitable based on what I have read here so far. Each collector then gets the option to focus on what is important to them.

kevinlenane
09-13-2020, 09:28 AM
I'll try to answer some of the additional questions:

1. It's most definitely a company (Genamint) and I will likely run and capitalize it as a tech startup. I'm starting with instant card evaluation, and once I do that well I'd expand into other items (collectibles and alternative assets) that lend themselves to peer-to-peer selling. In the end we want to be the engine that impartially answers the question of condition instantly, to better facilitate liquidity.

2. We are providing subgrades for front and back - four per side, eight total per card: centering, edges, corners and surface. I'd actually love to hear if anyone has any other requests or subdivisions (I think subdividing surface could be useful, e.g. print marks, creases, etc.).

3. So on the question of how to treat centering: for now my plan was just to give folks the subgrades as data and then let them make the call on how much it matters. I had planned on giving equal weight to all factors, which is just an average of the four grades from 1-10. I'd be curious whether people would prefer a more detailed unit of measure for things like centering, since we could obviously just say it's 45/55 and let you decide what you think that is. In my mind more information is better, but that may only be for Net54 types. Opinions welcome here too.

4. On the many detailed surface questions - you are both on to something - it is definitely the most nuanced area. It's really about the training data and how we create the synthetic data here. All sorts of subgrades underneath surface could be possible, even on a set basis, but it just requires inputs to specify what is what. I suspect we'll evaluate surface in a generic way to start, with the data building in what counts and what doesn't, and in a v2 we'd label it. So most things that should dock a grade will still dock it, and vice versa - you may just not get detailed information on it in early versions. There is some balance between scale and detailed info on the grading makeup. In short, these surface nuances WILL be accounted for, but they may not be called out in grading results. Again, happy to hear opinions here.
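The raw centering datum mentioned in point 3 is just a ratio of measured border widths. A tiny sketch of turning two border measurements into a "45/55"-style split (the function name and rounding behavior are illustrative):

```python
def centering_split(left_border: float, right_border: float) -> str:
    """Express two measured border widths as a centering split like
    '45/55' (percent of the total border on each side)."""
    total = left_border + right_border
    left = round(100 * left_border / total)
    return f"{left}/{100 - left}"

# e.g. borders measured at 9px and 11px on a normalized scan:
print(centering_split(9, 11))  # "45/55"
```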

swarmee
09-13-2020, 11:13 AM
Some responses:
1) A straight average is normally a terrible idea for an overall card grade. Think of a card with two moderate creases leading to a 2/10 surface score, then give it perfect centering, edges, and corners: the average is 8/10. That's why BGS has a cap on the difference between the lowest subgrade and the overall grade.
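A couple of lines make the point concrete (the 1.0-point cap below is illustrative only, not BGS's actual formula):

```python
def overall_grade(subgrades, cap=1.0):
    """Overall grade: straight average, but capped at `cap` points
    above the lowest subgrade, in the spirit of BGS's rule."""
    avg = sum(subgrades) / len(subgrades)
    return min(avg, min(subgrades) + cap)

# Creased surface (2) with otherwise perfect subgrades:
print(overall_grade([10, 10, 10, 2]))  # straight average 8.0 -> capped to 3.0
```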

2) Determining how to decrement a card's grade based on flaws that PSA calls qualifiers: are you going to take the surface grade all the way down to a 2/10 based on writing, or are you going to give it a 10 with a qualifier of sorts? Consistency in identifying out-of-registration cards and deciding on the proper level of point loss is also important.

3) You probably want to focus group various flaws to respected collectors *before* announcing your scoring algorithms. Continually tuning them, I would think, would actually hurt you in terms of acceptance and trust.

4) Centering could be graded on something like 2.5 point increments. 52.5/47.5 or better is a 10, 55/45 a 9.5, all the way down to miscut getting a 1. Another point could be taken off for slight tilt or 2 for drastic tilt.

<pre>Centering split   Grade
47.5 52.5 10
45 55 9.5
42.5 57.5 9
40 60 8.5
37.5 62.5 8
35 65 7.5
32.5 67.5 7
30 70 6.5
27.5 72.5 6
25 75 5.5
22.5 77.5 5
20 80 4.5
17.5 82.5 4
15 85 3.5
12.5 87.5 3
10 90 2.5
7.5 92.5 2
5 95 1.5
2.5 97.5 1
0 100 0.5 (Miscut)</pre>
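The table above is simple enough to state as code. Here's a minimal sketch of the proposed scale (the function name and the exact tilt handling are my own illustration of the suggestion, not an existing implementation):

```python
import math

def centering_grade(worse_side_pct, tilt=None):
    """Map a centering split to the proposed 2.5%-increment scale:
    52.5/47.5 or better is a 10, each further 2.5% costs half a point,
    down to 0.5 for a full miscut (0/100). `worse_side_pct` is the
    larger side, e.g. 55 for a 55/45 card. The optional tilt penalty
    (1 point slight, 2 drastic) follows the suggestion above."""
    pct = min(max(worse_side_pct, 50.0), 100.0)
    steps = max(0, math.ceil((pct - 52.5) / 2.5))
    grade = max(10.0 - 0.5 * steps, 0.5)
    if tilt == "slight":
        grade -= 1.0
    elif tilt == "drastic":
        grade -= 2.0
    return max(grade, 0.5)

print(centering_grade(55))   # 9.5, per the table
print(centering_grade(100))  # 0.5 (miscut)
```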

5) How much does/should a pinhole count toward the surface grade? A card with a pinhole may be missing 0.1% of the surface area of the card, while one with rounded corners may be missing 5% of the card area. However, a card with rounded corners is still regularly called VG (3/10), while one with a pinhole is currently capped at GD (2/10).

6) How would you declare a hand-cut card like Post Cereal to be "full"? Would it require the full black border or is part okay?

Some things I regularly see requested: specific centering measurements, specific card sizes, noting of hard to see flaws (pinholes, surface wrinkles), specific alterations on an altered card.

bks14sr
09-15-2020, 02:29 PM
This is a concept I’ve thought of before too. I always wondered why a nice vision system was not at least performing some of the assessed grading.

My question is, how do you "set the scale" for what's considered a perfect card for each manufacturer, year, etc.? Asking, as for some variants it's not so easy to find a perfect example.

kevinlenane
09-16-2020, 08:25 AM
I think the centering scale suggestions are great ones. With regards to hand cuts etc., it's all a matter of training data, so once we detect the card type the app can do things that are specifically appropriate for that set. I picked some more mainstream sets to start, to get the perfect masks more easily, but even in those cases we only need a few examples of some of the cards to basically assemble a perfect version artificially. It's a little hard to explain over the forum, but it amounts to using ML to create a flawless version of a card based on the variations between different copies that have each been graded a few times.

steve B
09-17-2020, 09:05 AM
What is ML?

swarmee
09-17-2020, 09:35 AM
What is ML?

Machine Learning.

steve B
09-17-2020, 07:49 PM
Machine Learning.

Totally spaced on that one. I'm not used to some of the acronyms these days.

68Hawk
09-17-2020, 08:45 PM
What's interesting to me is that graders already do much of what is being described in this thread, but collectors don't believe that all those criteria have been evaluated and reflected in the end grade.
But tell those same collectors that a computer with zero conflict or skin in the game has arrived at the exact same grade and there will be less argument.
I guess that's progress for some.

Imagine now that machines become the predominant graders of cards, and someone cracks out a card and shows the evaluation was incorrect due to flawed data or programming... 2 years AFTER first submissions have been taken and millions of graded cards later.
EVERY single card evaluated to that point could be considered a candidate for inaccurate grading, because one incorrect application of ML can be extrapolated and become cancerous to ALL its evaluations - not simply the vagaries of opinion of one grader versus another.
Will all your hobby concerns float away magically with this panacea?
I think not.
But good luck.

hcv123
09-17-2020, 09:43 PM
Some things I regularly see requested: specific centering measurements, specific card sizes, noting of hard to see flaws (pinholes, surface wrinkles), specific alterations on an altered card.


I love the idea of noting of hard to see flaws. I would also suggest either separate grades for front and back or a much heavier weight 90-10 or 80-20 at most to the front grade over the back grade.

swarmee
09-18-2020, 05:09 AM
Imagine now that the machines become predominant graders of cards, and someone cracks out a card and shows the evaluation was incorrect for flawed data or programming....2 years AFTER first submissions have been taken and millions of graded cards later.
That's one reason I'm asking so many questions now. If they just start grading without having a finely tuned system, the grades will be inadequate. Consistency is required of a grading company or tool.

toledo_mudhen
09-18-2020, 06:04 AM
That's one reason I'm asking so many questions now. If they just start grading without having a finely tuned system, the grades will be inadequate. Consistency is required of a grading company or tool.

I believe the current "human" process of grading a card involves opinions from several (if not many) pairs of eyes before the final grade is actually assigned, and I don't anticipate that AI will (or can) completely eliminate the "human" factor.

However, I do believe that this "eGrade" could become an extremely useful tool in expediting the entire process, as the intermediate sets of eyes could be completely eliminated: each card is initially assigned an AI grade, then passes through a final "human" evaluation (with just a few sets of highly qualified eyes) before being assigned the final "Certified Grade" and being slabbed and returned to the customer.

I see this as a huge move towards completely eliminating the 6-12 month turnaround times that we currently enjoy.

Best of Luck in your endeavors -

kevinlenane
09-18-2020, 09:37 AM
For sure, nothing is totally "perfect" when it comes to grading. The concern about a machine-learning error propagating virally isn't exactly how it works - we actually have 3 distinct human inputs and several more classes of machine inputs, all providing grades on the same sample cards, so in theory over time any nuance gets built in to the best average grade per category. We are planning on having the proof of concept ready to grade several vintage wax breaks live on camera for the Beckett Industry Summit, and I'll fire out the test version here if folks want to give feedback on it technically (technically meaning as it relates to the grade evaluation, not coding). I want to incorporate as much feedback as possible from the group here, for vintage grading in particular, as there is a lot of nuance to various set/year pairs that I feel should be included over time. My goal here is to provide an instant service that is transparent and provides all the available raw data that makes up the various numbers.

swarmee
09-18-2020, 09:43 AM
We are planning on having the proof of concept ready to grade several vintage wax breaks live on camera for the Beckett Industry Summit - and ill fire out the test version here if folks want to give feedback on it technically
I'd definitely be interested in seeing it and providing feedback. I think your demo concept is really interesting, especially since the Beckett Summits hosted a ton of verified trimmers over the years.

Throttlesteer
09-18-2020, 12:34 PM
I think the idea is cool and it certainly will help with spotting alterations and doctoring. But at some point, I find myself shaking my head at the level of technology being applied to assess something far beyond the human eye or even a loupe. I totally understand the monetary concerns with today's "market". But I still struggle internally with applying micron measurements and machine learning to pieces of cardboard. I know, it's my issue.

Good luck with the company!

vintagebaseballcardguy
09-18-2020, 02:30 PM
I think the idea is cool and it certainly will help with spotting alterations and doctoring...

You are not alone.

vintagetoppsguy
09-18-2020, 03:15 PM
I know this thread is about the card's surface, but I have a question about card size. How does it know the difference between a factory and non-factory cut? Some cards are just naturally cut smaller at the factory, so can it distinguish a card that was cut short at the factory from a card that was trimmed?

swarmee
09-18-2020, 03:47 PM
If you train it properly, then yes, it should be able to determine factory cuts vs. non-factory cuts.
Would be interesting to see if it could be trained to spot fake rough cuts like the 1952 Topps Look-n-See cards were given. But I could definitely see it catching the fake teeth given to the SI4K Tiger Woods RCs that both PSA and BGS missed on high-valued cards.

drcy
09-18-2020, 06:01 PM
But can it make a good cup of coffee?

Bigdaddy
09-18-2020, 07:23 PM
Being an engineer, I really like what you are doing. We know for a fact that dogs can smell things that are below our threshold, and we can train them to alert us to certain substances - drugs, cancers, odors left on clothing, etc. We also know that man made sensors can detect things we cannot see with our unaided eyes or fingers or other receptors, whether that is through greater magnification or sensitivity or bandwidth. We can't see radio waves, but our radios can certainly detect them. Imagine if we only let doctors use their hands and eyes and ears to diagnose our ailments.

The ML approach to grading is the next logical step in finding a way to accurately, consistently and without bias, grade a card based upon a set of known rules. It will take time for the system to learn, and hopefully the vast majority of that takes place in the lab before it is rolled out. Changes to the grading algorithm will happen over time, but that is no different than our current crop of companies using their own grading scale and changing it over time. PSA is known to be tougher in present day than it was on the earlier flips. How many times have I heard about a card in an older flip "That card would never get that grade if it was submitted today"? A pinhole used to automatically downgrade a card to a '1', now it could be a '2'. And we as a community have accepted those random changes.

Personally, I think a big challenge is to come up with an acceptable algorithm that takes into account all the items you can detect and then produces a number (1-10) that represents a measure of the card's 'goodness', or proximity to perfection. In the end, it will have to pass the eye test of hobbyists. I think that's one big downfall of the current companies - the inability to quantify 'eye appeal'.

Bravo young Kevin, for daring to introduce new technology into this hobby. Thank you and I wish you well.

BlueSky
09-18-2020, 07:42 PM
How are you dealing with potential ML bias in the training and synthetic data? Can you describe the accuracy (precision/recall) for various sets/cards?

steve B
09-18-2020, 11:37 PM
We are planning on having the proof of concept ready to grade several vintage wax breaks live on camera for the Beckett Industry Summit - and ill fire out the test version here if folks want to give feedback on it technically...

I wouldn't mind giving it a try either. I can run some peculiar stuff past it, and see what it does.

What does it run on?

Case12
09-19-2020, 08:59 AM
I have been consulting for a startup that is patenting 3D (multiple image sensors) and movement - in this case, converting unknown sign language and unknown spoken languages. Bias is a big issue for language. For a fixed example, such as a card, multiple sensors can also be used to answer some of these questions. At the beginning, human verification will be required, with common or agreed rules - then the learning will take over very quickly. I am really, really excited to see this applied to autographs, where often the learning can come from direct feedback of the signer (in some cases). This is awesome... and very doable (no magic... the real deal).

swarmee
09-21-2020, 06:58 AM
https://forums.collectors.com/discussion/1044189/wwii-girly-girl-eye-candy-for-horse-tell-me-why-these-ex-5-and-mint-9-cards-are-4-grades-off
Interesting case study... most likely the 4s and 5s have spider wrinkles or surface creases only seen at specific angles/lighting.

JollyElm
09-21-2020, 03:29 PM
There is another wrinkle to add to the mix. Would this system be able to detect the differences between a card that was actually printed on the lighter side (from a dearth of certain colors of ink at the factory) and one that has spent time in the sun and faded?

Here's an example of what I'm talking about...

419096

The top two look faded, but considering their coloring is identical (both were submitted by me and weren't left out in the sun or anything), they actually came out of the factory that way, so PSA graded them accordingly (Although, perhaps the 6 was graded accurately by one grader, and a different grader hammered the no creases/wrinkles 4 for being 'faded'?? Who knows.).

swarmee
09-21-2020, 05:49 PM
Good question. There are all kinds of print defects or color printing issues that would need to be sussed out and appropriately weighted. I think your example error cards are not print variations, per se, since it looks like the omission of the black around "1969 Rookie Stars" is an intentional change to the black printing plate. All three cards have the black outline around the players and header, very clearly, and black is very tough to fade without damaging the rest of the colors.

T205 GB
09-22-2020, 09:29 AM
Kevin, it was great talking to you this morning. I think you are headed in the right direction and hope, as things progress, we are able to work together in "making a better mousetrap," as Leon would say.

GasHouseGang
09-22-2020, 09:53 AM
I think that was Ralph Waldo Emerson.:D Although I'm sure Leon has said it too.

Case12
09-22-2020, 05:32 PM
Curious - are you using TensorFlow?

kevinlenane
09-22-2020, 05:45 PM
To answer questions on the surface details - anything that the human eye can detect a machine can as well (with greater resolution on most new cameras), so any edge cases can be handled by simply accounting for them with real and synthetic data. It's worth noting that we expect quite a bit of this kind of give and take between users and the company - feedback is going to be welcomed and utilized, so something like the nuances of Hocus Focus or T-card edge cuts would require some feedback and then review for accuracy/ethics before it gets accommodated in the grading. Generally you can expect the data to be extremely transparent and eventually explanatory.

kevinlenane
09-22-2020, 05:49 PM
FWIW I haven't seen a surface crease yet that can't be detected at a pixel level - but that detection is corroborated with training data from the physical card, meaning a human can inspect the card in any way they like to provide the grades associated with the image. In this way there are no gaps between eyes and lens. The only thing that might get missed in this dual training model is a stain or mark that only exists on the edge of the card but doesn't show on the front or back when looking at the edges. I've only been able to produce this on extremely thick cards like the modern memorabilia/insert stuff - but I'm okay not worrying too much about that case right now...
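The "departure from the perfect version" idea described earlier in the thread can be sketched as a per-pixel comparison between an aligned scan and a reference image, with the surface score driven by how many pixels diverge and by how much. A toy version (NumPy only; the `surface_divergence` function, the threshold, and the toy images are my own illustrative assumptions, not the actual pipeline):

```python
import numpy as np

def surface_divergence(scan, reference, threshold=0.1):
    """Compare a normalized card scan to a reference "perfect" image.

    Returns a boolean anomaly mask and a scalar divergence score: the
    fraction of pixels whose absolute difference exceeds the threshold.
    Both inputs are float arrays in [0, 1] with identical shapes.
    """
    diff = np.abs(scan.astype(float) - reference.astype(float))
    mask = diff > threshold
    score = mask.mean()          # proportion of anomalous pixels
    return mask, score

# Toy example: a "perfect" flat card and a scan with a simulated crease.
reference = np.full((100, 70), 0.8)
scan = reference.copy()
scan[40:42, :] -= 0.3            # a dark two-pixel-tall horizontal crease

mask, score = surface_divergence(scan, reference)
print(round(score, 4))           # prints 0.02: 140 of 7000 pixels flagged
```

A real system would weight the score by defect severity and location rather than using a single flat threshold, but the shape of the computation is the same.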

Goudey
10-20-2020, 05:57 PM
417880

Here is the perspective normalization I was referencing for those who are interested... basically this ensures that we have the right angle on the card to evaluate appropriately. Otherwise minor changes in camera angle would produce dramatically inaccurate grades...

Leon, who else is working on this? Would love to compare notes :)
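Perspective normalization of this kind is typically done with a 4-point homography: detect the card's corners, then solve for the 3x3 transform that maps them onto a canonical upright rectangle. A self-contained sketch (NumPy only; the corner coordinates are made up, and a production system would use something like OpenCV's `warpPerspective` to resample the whole image rather than single points):

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve for the 3x3 perspective transform mapping src -> dst.

    src, dst: lists of four (x, y) corner points. Standard DLT setup:
    each point pair contributes two rows to an 8x8 linear system,
    with the bottom-right element of H fixed at 1.
    """
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, pt):
    """Apply homography H to a single (x, y) point."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Detected (skewed) card corners mapped to a canonical upright rectangle.
detected = [(12, 8), (390, 30), (402, 560), (5, 540)]
canonical = [(0, 0), (350, 0), (350, 490), (0, 490)]

H = homography_from_points(detected, canonical)
errs = [np.hypot(*(np.array(warp_point(H, s)) - d))
        for s, d in zip(detected, canonical)]
print(max(errs) < 1e-6)  # prints True: every corner lands on its target
```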

Hi Kevin, question for you: how does your normalization not affect the pixel vectors of the original scan? It presumably changes the values you have for the raw images (assuming you're not reducing dimensions). Question 2: how do you deal with scans of different sizes/complexities when you "grade" them? Seems like they could all be different-size vectors. Super cool work.
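On the second question, one plausible answer (an assumption on my part, not a confirmed detail of Kevin's pipeline) is to resample every submitted scan onto a fixed canonical grid before any comparison, so all cards yield pixel vectors of the same length regardless of scanner resolution:

```python
import numpy as np

def to_canonical(scan, shape=(512, 358)):
    """Nearest-neighbor resample a scan to a fixed canonical size.

    Whatever resolution a submitted scan arrives at, comparing it
    against a reference requires a fixed-length pixel vector, so
    every image is resampled onto the same grid first.
    """
    rows = np.arange(shape[0]) * scan.shape[0] // shape[0]
    cols = np.arange(shape[1]) * scan.shape[1] // shape[1]
    return scan[np.ix_(rows, cols)]

a = np.random.rand(1200, 800)    # a high-resolution scan
b = np.random.rand(600, 420)     # a smaller scan
print(to_canonical(a).shape, to_canonical(b).shape)  # both (512, 358)
```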

bnorth
10-20-2020, 06:43 PM
Great video. I wish you the best and hope to see this being used ASAP to grade cards.

JollyElm
10-20-2020, 07:41 PM
Here's a question. Is your scanning effective if a card is already graded and sitting in a slab? Because it would be really cool if you were able to scan 10, 20, 50, whatever, different graded examples of the very same card where each received the exact same grade (say PSA 7), and were able to tell which ones were 'accurately' (for lack of a better word) graded and which were not. It'd be a helluva interesting undertaking. Of course, you would need a lot of volunteers to send you their cards.

Lobo Aullando
10-20-2020, 08:49 PM
Everybody loves a distribution! The big question is skewed-left or skewed-right.

kevinlenane
11-05-2020, 01:38 PM
Hey all - so we've managed to get the remaining sub-grades implemented, and in head-to-heads with human graders it lands within an average of 0.23 of their sub-grade averages. Next step is building out additional set support and modern set support before building the API and/or mobile app.
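To make the 0.23 figure concrete, here is one way such a head-to-head metric could be computed: the mean absolute difference between per-card sub-grade averages. The grades below are invented purely for illustration:

```python
import numpy as np

# Hypothetical head-to-head: human vs. machine sub-grades per card,
# columns = corners, edges, surface, centering.
human   = np.array([[8.0, 7.5, 9.0, 8.5],
                    [6.0, 6.5, 5.5, 7.0]])
machine = np.array([[8.2, 7.5, 8.6, 8.4],
                    [6.1, 6.8, 5.9, 7.0]])

# Average the four sub-grades per card, then take the mean absolute
# gap between the human and machine averages across cards.
gap = np.abs(human.mean(axis=1) - machine.mean(axis=1)).mean()
print(round(gap, 4))  # mean absolute sub-grade-average gap
```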

https://www.youtube.com/watch?v=qXSkoFBvcDY

Definitely getting exciting - we'll have some additional unique features, including visual sub-grade justification, which should put to rest any accuracy questions. I don't have the business model or distribution figured out just yet, but we have some strong first moves in mind.

swarmee
11-05-2020, 01:59 PM
Nice update. Interested to see where this goes. Please don't let someone buy the technology just to bury it.

mattglet
11-05-2020, 02:14 PM
The overlay feedback is going to be, as the kids say, lit.

Very exciting.

hcv123
04-21-2021, 12:55 PM
Figured it was time to pop this thread to the top.

Congratulations Kevin!

68Hawk
04-22-2021, 07:15 PM
Hey Kevin - I'm sure you're still watching over your baby and this thread also, so was wondering...

Could you give us an update on the progress of your work?

I understood how what you were developing would assist in recognizing a card's issue, measuring size for the standard, identifying damage to the surface and being able to do things like scan for alterations such as recoloring.

Is there a method you happened upon that would identify trimming with certainty, specifically as it has been described as being of assistance in the linked PSA articles?

To me, trimming is the fraud alteration that causes the most angst and is the hardest to say with 100% certainty has occurred, because it's possible to make edges appear 'non-fresh'.
We as hobbyists often make assumptions based on what seems likely, but I feel grading companies fall more heavily on the side of what they can prove and be 100% positive in calling out.

Appreciate any new thoughts you might add to what we could be expecting as part of PSA purchasing your work.