by s_c_r on 12/10/18, 7:49 PM with 53 comments
by randcraw on 12/10/18, 9:32 PM
2) You could assess different delivery routes/regions to determine if they are more or less on-time than other routes/regions. Is the number of delivery vehicles adequate? When should you adjust the number of vehicles or change the routes themselves (like moving some peripheral regions to another route, or adjusting the cost charged when delivery is delayed)?
3) When do external factors (like weather, especially rain or snow) introduce delays? Can you predict these delays and, ideally, compensate by changing routes or adding more delivery vehicles?
4) Should you more dynamically adjust your shipping fees to reflect faster/slower delivery time targets? This way you can tune your routes and manpower to save money for those who aren't as time sensitive, and improve the response time for those who are.
A lot of this is basic operations research. But you can call it AI, or use AI techniques just as well as traditional OR methods. Nobody will care what math/methods you use if you can add value.
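To make the OR angle concrete, here's a minimal sketch (all numbers invented) that sizes a vehicle fleet per route as a linear program with SciPy:

    from scipy.optimize import linprog

    # All numbers are invented: daily cost of a vehicle on each route,
    # packages one vehicle can handle, and expected demand per route.
    cost_per_vehicle = [100, 120, 90]
    packages_per_vehicle = [200, 150, 180]
    demand = [900, 600, 700]

    # Minimize total cost subject to capacity covering demand on every
    # route; linprog wants A_ub @ x <= b_ub, so flip the sign.
    A_ub = [[-packages_per_vehicle[0], 0, 0],
            [0, -packages_per_vehicle[1], 0],
            [0, 0, -packages_per_vehicle[2]]]
    b_ub = [-d for d in demand]

    result = linprog(cost_per_vehicle, A_ub=A_ub, b_ub=b_ub,
                     bounds=[(0, None)] * 3)
    print(result.x)  # fractional vehicle counts; round up for a real schedule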
by johngalt on 12/10/18, 10:34 PM
"I've just learned a neat new tool but I never apply it because I can solve all the problems in front of me with the tools I have."
In effect, you've found a local maximum where every direction seems like a step backwards, or an investment of time without any reasonable payoff.
Here are two general strategies to deal with this:
1. Take a well-understood, well-documented existing need, and replicate the solution with the new toolkit. Acknowledge from the outset that this will be a step backwards, but go through the details anyway to better understand the technology. The goal isn't to make the system better, but to improve your understanding of ML and its real-world application. By choosing a well-understood system, you are only learning applied ML rather than trying to simultaneously learn ML and the problem. Work toward parity with your existing methods. This part is rarely a big step forward, but I guarantee that this process will generate 100 good ideas about where to go next.
2. Find problems that were previously ignored, because they couldn't be solved. Something no one is even thinking to ask for, because none of the prior tools could do the job. This is the ideal situation because you are in a greenfield space where anything is an improvement. For ML specifically look at anywhere a lot of data is being generated but no one has the time to read it all unless something goes wrong.
When learning any new technology there is always a gap between learning it in the lab, and trying to execute with it IRL. The best way to maximize your own ability is to simply start applying it and building experience. Don't wait for a perfect halo project.
by cVwEq on 12/10/18, 9:20 PM
First, coding toy problems (related to shipping or not) that implement linear regression, genetic algorithms, neural networks, etc. will be a useful start.
Analyze shipping and tracking EDI data to predict whether a shipment will be late, as a 0.0-to-1.0 score where 1.0 means it will certainly be late (see the sketch after this list)
Predict the likelihood a customer will churn (stop using your services) based on changes in volume, billing amounts, and other characteristics
Predicting this year's peak season shipping volume based on past years' data. See if you can beat the marketing/sales folks' predictions
Identify factors correlated with the most profitable shippers
Predict the likelihood a package is damaged
Use a genetic algorithm to improve driver routing
Reconfigure pickup times / drop off times to improve profitability
Use EDI shipping data to build a network graph of who is shipping to whom, segmented by type of some sort. Say you find that many A-type firms are shipping to B-type firms; any B-type firms that are not already customers could be interesting targets.
Score prospects to estimate their profitability by comparing their characteristics to existing customers' profitabilities
Use a neural network (or something else) to analyze EDI shipping data, damage data, and make packaging recommendations to customers
Analyze tracking EDI data, segmented by delivery area (zip+4?), and see if there are areas where drivers deliver more efficiently. Maybe start an initiative to look at what separates the most efficient drivers from the least.
Reporting: not sexy, but really useful in this space
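As a concrete starting point for the first idea above (the 0-to-1 late-shipment score), here's a minimal sketch; the features and labels are synthetic stand-ins for whatever your EDI data actually contains:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic stand-ins for EDI-derived features and a late/on-time label
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 4))  # e.g. distance, weight, stops, day-of-week
    y = (X[:, 0] + rng.normal(scale=0.5, size=5000) > 1).astype(int)  # 1 = late

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
    model = LogisticRegression().fit(X_train, y_train)

    # predict_proba gives the 0.0-to-1.0 "will be late" score the idea asks for
    late_probability = model.predict_proba(X_test)[:, 1]
    print(late_probability[:5])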
Bona fides: I used to work in the supply chain consulting space and consulted at firms like yours. Things are surprisingly basic in the shipping space - less meaty data science than one might think.
by kmax12 on 12/10/18, 10:13 PM
You mentioned using Pytorch. Instead, I recommend classical machine learning using a library like scikit-learn (https://scikit-learn.org/). Use a random forest classifier and you'll get pretty good results out of the box.
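For instance, a minimal sketch with a synthetic dataset standing in for your own features and labels:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Synthetic data standing in for your own feature matrix and labels
    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    print(cross_val_score(clf, X, y, cv=5).mean())  # sanity check before tuning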
If your data is in a postgres database across multiple tables, you will likely have to perform feature engineering to get it ready for machine learning. For that, I recommend a library for automated feature engineering called Featuretools (http://github.com/featuretools/featuretools/). Here's a good article to get started with it (https://towardsdatascience.com/automated-feature-engineering...)
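A minimal sketch with toy tables standing in for your postgres data (the table and column names are invented; the API names follow Featuretools 1.x, while older releases used entity_from_dataframe and target_entity):

    import pandas as pd
    import featuretools as ft

    # Toy tables standing in for two of your postgres tables
    customers = pd.DataFrame({"customer_id": [1, 2], "region": ["east", "west"]})
    shipments = pd.DataFrame({
        "shipment_id": [10, 11, 12],
        "customer_id": [1, 1, 2],
        "weight_kg": [5.0, 2.5, 9.1],
    })

    es = ft.EntitySet(id="shipping")
    es.add_dataframe(dataframe_name="customers", dataframe=customers,
                     index="customer_id")
    es.add_dataframe(dataframe_name="shipments", dataframe=shipments,
                     index="shipment_id")
    es.add_relationship("customers", "customer_id", "shipments", "customer_id")

    # Deep Feature Synthesis: per-customer aggregates (counts, means, etc.)
    feature_matrix, feature_defs = ft.dfs(entityset=es,
                                          target_dataframe_name="customers")
    print(feature_matrix.head())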
Finally, you will need to define a prediction problem and extract labeled training examples. I see people in this thread have suggested ideas of problems to work on. The key here is to make sure you pick a problem where you can both predict and take action based on the prediction. For example, you could predict that there will be an influx of shipments to fulfill tomorrow, but that might not be enough time to hire more people to help you fulfill them.
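One way to bake that lead time into the labels themselves, sketched with a synthetic daily-volume table (the 7-day hiring horizon is an assumption):

    import pandas as pd

    # Synthetic daily shipment volumes standing in for your real history
    daily = pd.DataFrame({
        "date": pd.date_range("2018-01-01", periods=60),
        "shipments": range(60),
    }).set_index("date")

    LEAD_DAYS = 7  # assume you need a week to bring on extra help
    daily["label"] = daily["shipments"].shift(-LEAD_DAYS)  # future volume as target
    training = daily.dropna()  # drop the tail where the future is still unknown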
If you're curious what the process looks like end-to-end check out this blog series on a generalized framework for solving machine learning problems that was applied to customer churn prediction: https://blog.featurelabs.com/how-to-create-value-with-machin...
Full disclosure: I work for Feature Labs and develop Featuretools.
by SatvikBeri on 12/10/18, 8:06 PM
One big example is fraud: it's next-to-impossible to define a 100% accurate set of rules to filter fraud, but it's often easy to train an algorithm to catch the worst offenders, or flag suspicious cases to significantly narrow the amount a human needs to review.
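As an illustration of that flag-for-review pattern, a minimal sketch using scikit-learn's IsolationForest anomaly detector; the transaction features are invented:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Synthetic transaction features (amount, time of day, etc.)
    rng = np.random.default_rng(0)
    transactions = rng.normal(size=(1000, 4))

    detector = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
    flags = detector.predict(transactions)  # -1 = suspicious, 1 = looks normal

    to_review = transactions[flags == -1]  # a small queue for a human, not 1000 rows
    print(len(to_review))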
by edraferi on 12/10/18, 8:28 PM
Note: your ML system will likely be less explainable than the existing rules. This won't matter as much if the current rule collection is already more complex than a human can deal with. It will matter a LOT if your decisions are subject to regulation.
by dmitrygr on 12/10/18, 8:09 PM
> I can't come up with any problems
> that I couldn't solve with a
> relational database (postgres) and
> a data transformation step.
Congratulations, you have seen through the hype! Most "machine learning" claims you see are solvable just with linear regressions on slightly cleaned up data.
by screye on 12/10/18, 9:59 PM
If your problems aren't audio / image based, then consider using traditional ML instead.
If you are just starting out, check out SKLearn, SciPy, and graphical models like CRFs. They are tried-and-tested methods that also require less specialized skills.
As someone else said, a lot of AI/ML tools are simply repackaged old-school OR methods. The older methods get 95% of the way there, with <50% of the effort.
Cutting-edge ML isn't required for most problems, especially non-visual or time-series problems.
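To illustrate, a trend fit plus extrapolation in plain SciPy, no deep-learning stack required (the data is synthetic):

    import numpy as np
    from scipy import stats

    # Synthetic daily volumes with a linear trend plus noise
    days = np.arange(100)
    volume = 50 + 2.5 * days + np.random.default_rng(0).normal(scale=10, size=100)

    fit = stats.linregress(days, volume)
    print(fit.slope, fit.intercept, fit.rvalue ** 2)  # trend + goodness of fit
    forecast = fit.intercept + fit.slope * 110        # simple extrapolation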
by garysieling on 12/10/18, 8:34 PM
by badgers on 12/10/18, 9:17 PM
by alimw on 12/10/18, 9:31 PM
by philip1209 on 12/10/18, 8:18 PM
by choward on 12/10/18, 9:20 PM
It sounds like you have the right tool for the job now, so great. Keep using it. Dependencies should be added to projects as conservatively as possible. The best dependency is no dependency. You shouldn't go seeking out dependencies. Your app doesn't depend on machine learning, so why would you make it depend on something it doesn't actually depend on? Future maintainers (including yourself) would hate you for it.
by nothrows on 12/10/18, 9:11 PM
by awa on 12/10/18, 9:11 PM
There's also scope for using ML on the analytics and monitoring side, apart from the main application; that's generally better tolerated by the product team.
by davemp on 12/10/18, 9:00 PM
by mds on 12/10/18, 8:19 PM
https://www.ibm.com/watson/supply-chain/resources/csc/deskto...
One concrete example IBM likes to talk up is predicting shipping delays due to weather events and automatically recommending alternate suppliers.
by arandr0x on 12/10/18, 9:31 PM
There may be a way you can do some computer vision tasks for quality control in some parts of your business -- most businesses that deal in physical goods have quality control by visual inspection and in most of those you can end up with a CNN that provides a quick enough, good enough solution. However, sometimes for regulatory reasons it's not practical, or it's something that is not a critical part of the chain, and so on. But you could ask operations staff about whether they sometimes do that kind of task, and whether it takes up a lot of their day. It's not like you have to find the good idea alone.
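If that kind of visual inspection task does turn up, a transfer-learning setup in PyTorch is roughly this shape. This is only a sketch: the two-class ok/defect framing is an assumption, and older torchvision releases spell the pretrained flag as pretrained=True rather than weights=:

    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from a pretrained backbone, swap in a two-class head (ok / defect)
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)

    # Freeze the pretrained backbone; train only the new head at first
    for param in model.parameters():
        param.requires_grad = False
    for param in model.fc.parameters():
        param.requires_grad = True

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    # ...then the usual training loop over labeled inspection images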
by RoadieRoller on 12/11/18, 2:40 AM
My problem statement includes classifying hundreds of thousands of PDFs into different categories based on the content/first few pages. That is, if you have a PDF of a novel by Jeffrey Archer, it should be categorized as Entertainment or Novel, etc. If you have an e-book of, say, Python for Dummies, it should be categorized as Engineering, Technology, Education, Programming, or the like.
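A rough sketch of that pipeline: extract text from the first few pages, then train a bag-of-words classifier. pypdf is one assumed choice for extraction, and the file names and category labels here are invented:

    from pypdf import PdfReader
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def first_pages_text(path, n_pages=3):
        # Pull text from the first few pages; image-only pages yield ""
        reader = PdfReader(path)
        return " ".join(page.extract_text() or "" for page in reader.pages[:n_pages])

    # Hypothetical hand-labeled examples; you'd want thousands in practice
    texts = [first_pages_text(p) for p in ["novel.pdf", "python_book.pdf"]]
    labels = ["Entertainment", "Programming"]

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(texts, labels)
    print(clf.predict([first_pages_text("unknown.pdf")]))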
by elliekelly on 12/10/18, 11:11 PM
by c_moscardi on 12/10/18, 9:40 PM
5 minute deck: https://github.com/codingitforward/cdfdemoday2018/blob/maste...
Feel free to shoot me a message.
by laurentl on 12/11/18, 9:50 AM
If you have access to production logs and metrics, try to model things like page load times, server load, network latency, errors/timeout, number of page views/unique visitors.
You might hit on unexpected correlations and maybe unknown bugs (e.g. when page X is loaded with input Y, timeouts and server load increase because of a broken SQL request), or insights on the health of the production platform.
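As a tiny sketch of that kind of metric modeling, here's a rolling z-score that flags latency spikes; the latency series is synthetic:

    import numpy as np
    import pandas as pd

    # Synthetic per-minute latency standing in for real production metrics
    latency_ms = pd.Series(np.random.default_rng(0).gamma(2.0, 50.0, size=1440))

    rolling = latency_ms.rolling(window=60)
    zscore = (latency_ms - rolling.mean()) / rolling.std()

    anomalies = latency_ms[zscore > 3]  # spikes vs. the past hour, worth a look
    print(anomalies.head())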
by sidlls on 12/10/18, 8:01 PM
Also consider that ML simply isn't a useful tool for your company. "Let's not spend (more) time trying various ML techniques" is a perfectly valid and useful outcome of an experiment using ML.
by gengstrand on 12/11/18, 4:10 AM
by thestorm_jpeg on 12/11/18, 1:20 AM
In the end, is it really worth all that effort if you end up using Cognitive Services on Azure or ML on AWS? How do you deploy your model and actually USE it in a WPF CRUD app?
by tugberkk on 12/11/18, 10:29 AM
It shows a link to an article called: "No, you don't need ML/AI. You need SQL"
by maxxxxx on 12/10/18, 8:17 PM
Do it anyway with machine learning and see how it goes; at least you will know the expected results.
by leowoo91 on 12/10/18, 8:58 PM
by nl on 12/10/18, 9:16 PM
by gaius on 12/10/18, 8:40 PM
Having said that, there must be a beancounter in your organisation whose job is making forecasts. That person wouldn't hesitate to lay you off if the company hits a rough patch. I think you get my meaning.
by nektro on 12/10/18, 10:45 PM