wzhe06 / sparrowrecsys

2.3K stars · 57 watchers · 811 forks · 65.03 MB

A Deep Learning Recommender System

Home Page: http://wzhe.me/SparrowRecSys/

License: Apache License 2.0

HTML 11.60% Java 26.41% JavaScript 7.59% Scala 15.47% Python 38.93%
recommender-system deep-learning machine-learning

sparrowrecsys's Introduction

SparrowRecSys

SparrowRecSys is a movie recommender system. The name SparrowRecSys ("sparrow recommender system") comes from the Chinese saying "a sparrow may be small, but it has all the vital organs". It is a Maven-based, mixed-language project that brings together the different modules of a recommender system built with TensorFlow, Spark, Jetty Server, and more. Hopefully you can use SparrowRecSys to learn how recommender systems work, and perhaps help improve it.

A hands-on course based on SparrowRecSys

At Geek Time's invitation, the author created the course 深度学习推荐系统实战 (Deep Learning Recommender System in Action), which explains all of SparrowRecSys's technical details, covering deep learning model architectures, model training, feature engineering, model evaluation, online model serving, and the internal logic of the recommendation server.

Requirements

  • Java 8
  • Scala 2.11
  • Python 3.6+
  • TensorFlow 2.0+

Quick Start

Open the project in IntelliJ IDEA, find RecSysServer, right-click and choose Run, then visit http://localhost:6010/ in a browser to see the recommender system's front end.

Project Data

The project's data comes from the open-source movie dataset MovieLens. The bundled dataset is a trimmed-down version of MovieLens that keeps only 1,000 movies with their related ratings and user data. For the full dataset, please download it from the official MovieLens website; the MovieLens 20M Dataset is recommended.

SparrowRecSys Technical Architecture

SparrowRecSys follows the classic architecture of an industrial-grade deep learning recommender system, including offline data processing, model training, nearline stream processing, online model serving, front-end display of recommendation results, and other modules. The SparrowRecSys architecture diagram:

Deep Learning Models Implemented in SparrowRecSys

  • Word2vec (Item2vec)
  • DeepWalk (Random Walk based Graph Embedding)
  • Embedding MLP
  • Wide&Deep
  • Neural CF
  • Two Towers
  • DeepFM
  • DIN(Deep Interest Network)

Related Papers

Other Related Resources

sparrowrecsys's People

Contributors

bfgf52, birdviewhome, crazytianc, dependabot[bot], gekfreeman, happyvictorwu, sniperdarksider, v-wx-v, wzhe06, yiksanchan, zcxia23, zhengjxu


sparrowrecsys's Issues

Recommendation domain

Based on this project, could I adapt it into an article recommender system?

About the random walk sampling method

Hello Mr. Wang, I have some questions about the random walk sampling in the code. The code is as follows:

    while i < sampleLength:
        if (curElement not in itemDistribution) or (curElement not in transitionMatrix):
            break
        probDistribution = transitionMatrix[curElement]
        randomDouble = random.random()
        accumulateProb = 0.0
        for item, prob in probDistribution.items():
            accumulateProb += prob
            if accumulateProb >= randomDouble:
                curElement = item
                break
        sample.append(curElement)
        i += 1

My understanding of the code above: first draw a random number randomDouble, then iterate over the current node's neighbors in the order given by probDistribution.items(), adding each neighbor's probability to accumulateProb; once the cumulative sum exceeds randomDouble, that node is chosen and appended to the path.

I feel this method does not make use of the probability distribution in probDistribution.items(): even with unweighted edges, i.e. all neighbors equally likely, this approach would work just the same. My thought is whether we could fully exploit the neighbors' probability distribution, giving larger sampling probability to those with higher probability (i.e. pairs that co-occur more often)? Using np.random.choice(adj_nodes, p=node_distributions) would implement such sampling, and it may well be faster than this loop.

With the teacher's sampling approach, if probDistribution.items() were first sorted in descending order before sampling, that should also fully exploit the neighbors' probability distribution.

Is my understanding correct?
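A side note: the cumulative-sum loop in the quoted code is itself a standard inverse-CDF weighted sample, so it does respect the transition probabilities; a vectorised alternative can still be worth it for speed. A minimal sketch of distribution-aware next-node sampling using the standard library's random.choices (equivalent in effect to the np.random.choice call mentioned above); node names and probabilities are hypothetical:

```python
import random

def sample_next(prob_distribution, rng=random):
    """Draw the next node of a random walk in proportion to its transition
    probabilities. Same effect as the cumulative-sum loop in the quoted code."""
    nodes = list(prob_distribution)
    probs = list(prob_distribution.values())
    return rng.choices(nodes, weights=probs, k=1)[0]

# hypothetical transition distribution for one node
dist = {"movieA": 0.7, "movieB": 0.2, "movieC": 0.1}
rng = random.Random(42)
counts = {m: 0 for m in dist}
for _ in range(10_000):
    counts[sample_next(dist, rng)] += 1
# high-probability neighbours dominate the drawn samples
```

random.choices accepts unnormalised weights, so raw pair counts could be passed directly without converting them to probabilities first.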

Computation of transitionCountMatrix in graph embedding

Source code:

def generateTransitionMatrix(samples):
    pairSamples = samples.flatMap(lambda x: generate_pair(x))
    pairCountMap = pairSamples.countByValue()
    pairTotalCount = 0
    transitionCountMatrix = defaultdict(dict)
    itemCountMap = defaultdict(int)
    for key, cnt in pairCountMap.items():
        key1, key2 = key

        # should this be += cnt here?
        transitionCountMatrix[key1][key2] = cnt
        itemCountMap[key1] += cnt
        pairTotalCount += cnt
        ......

Proposed fix:

    for key, cnt in pairCountMap.items():
        key1, key2 = key

        if key1 not in transitionCountMatrix or key2 not in transitionCountMatrix[key1]:
            transitionCountMatrix[key1][key2] = cnt
        else:
            transitionCountMatrix[key1][key2] += cnt

        itemCountMap[key1] += cnt
        pairTotalCount += cnt
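One note on this: pairCountMap comes from countByValue, which already aggregates duplicates, so each (key1, key2) key appears at most once and plain assignment cannot clobber an accumulated count. Accumulation only matters if counts are merged from several maps; a minimal sketch of that case with hypothetical data, using dict.get to collapse the if/else branch into one line:

```python
from collections import defaultdict

def merge_pair_counts(*pair_count_maps):
    """Accumulate several (key1, key2) -> count maps into one nested
    transition-count matrix."""
    transition_count_matrix = defaultdict(dict)
    for pair_count_map in pair_count_maps:
        for (key1, key2), cnt in pair_count_map.items():
            # initialise-or-accumulate in a single expression
            transition_count_matrix[key1][key2] = (
                transition_count_matrix[key1].get(key2, 0) + cnt
            )
    return transition_count_matrix

# hypothetical partial counts from two batches
m = merge_pair_counts({("A", "B"): 3, ("A", "C"): 1}, {("A", "B"): 2})
# m["A"]["B"] == 5, m["A"]["C"] == 1
```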

Some things that seem unreasonable?

Hello, after getting the system running I found that User136's recommendation list contains movies he has already watched. Is that reasonable?

embedding.scala

userEmb = user._2.foldRight[Array[Float]]((row, newEmb) => {
  val movieId = row.getAs[String]
  val movieEmb = word2VecModel.getVectors.get(movieId)
  if (movieEmb.isDefined) {
    newEmb.zip(movieEmb.get).map { case (x, y) => x + y }
  } else {
    newEmb
  }
})

Teacher, could you explain this code? What does user._2.foldRight[Array[Float]] mean (and userEmb)? What do the parameters row and newEmb mean? Why can movieEmb.get be written like that, and what does it mean? And this part:

if (movieEmb.isDefined) {
  newEmb.zip(movieEmb.get).map { case (x, y) => x + y }
} else {
  newEmb
}

what is its purpose?
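Not an official answer, but for readers more comfortable with Python, the fold above — accumulating a user embedding by summing the embeddings of the user's watched movies and skipping ids without a vector — can be sketched roughly as follows (all names and values hypothetical):

```python
def build_user_emb(movie_ids, movie_vectors, dim=4):
    """Sum the embeddings of a user's watched movies; ids without a vector
    are skipped, mirroring the isDefined check in the Scala fold."""
    user_emb = [0.0] * dim  # plays the role of the fold's initial accumulator
    for movie_id in movie_ids:               # each step is one (row, newEmb) call
        movie_emb = movie_vectors.get(movie_id)  # like word2VecModel.getVectors.get(movieId)
        if movie_emb is not None:                # like movieEmb.isDefined
            # element-wise sum, like newEmb.zip(movieEmb.get).map(x + y)
            user_emb = [x + y for x, y in zip(user_emb, movie_emb)]
    return user_emb

vectors = {"m1": [1.0, 2.0, 0.0, 0.0], "m2": [0.5, 0.5, 1.0, 0.0]}
emb = build_user_emb(["m1", "m2", "unknown"], vectors)
# emb == [1.5, 2.5, 1.0, 0.0]; "unknown" contributed nothing
```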

Running the jar reports: java.io.FileNotFoundException: nullsampledata/movies.csv

$ java -jar SparrowRecSys-1.0-SNAPSHOT-jar-with-dependencies.jar
log4j:WARN No appenders could be found for logger (org.eclipse.jetty.util.log).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
webRootLocation.toURI().toASCIIString(): jar:file:/mnt/e/work/SparrowRecSys/target/SparrowRecSys-1.0-SNAPSHOT-jar-with-dependencies.jar!/webroot/index.html
Web Root URI: null
Loading movie data from nullsampledata/movies.csv ...
Exception in thread "main" java.io.FileNotFoundException: nullsampledata/movies.csv (No such file or directory)
at java.base/java.io.FileInputStream.open0(Native Method)
at java.base/java.io.FileInputStream.open(FileInputStream.java:219)
at java.base/java.io.FileInputStream.<init>(FileInputStream.java:157)
at java.base/java.util.Scanner.<init>(Scanner.java:639)
at com.sparrowrecsys.online.datamanager.DataManager.loadMovieData(DataManager.java:56)
at com.sparrowrecsys.online.datamanager.DataManager.loadData(DataManager.java:41)
at com.sparrowrecsys.online.RecSysServer.run(RecSysServer.java:54)
at com.sparrowrecsys.online.RecSysServer.main(RecSysServer.java:21)

DIEN model loss is negative

Is this a bug in the DIEN model? Testing with sampledata, the loss is always negative, which doesn't seem right; I'm not sure what causes it.

TensorFlow output:
Epoch 1/5
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/functional.py:591: UserWarning: Input dict contained keys ['rating', 'timestamp', 'userAvgReleaseYear', 'userReleaseYearStddev'] which did not match any model input. They will be ignored by the model.
[n for n in tensors.keys() if n not in ref_input_names])
7403/7403 [==============================] - 175s 23ms/step - loss: 3.4184 - auc_6: 0.5772 - auc_value: 0.5565
Epoch 2/5
7403/7403 [==============================] - 162s 22ms/step - loss: -3.2109 - auc_6: 0.6725 - auc_value: 0.6455
Epoch 3/5
7403/7403 [==============================] - 162s 22ms/step - loss: -3.4127 - auc_6: 0.7511 - auc_value: 0.7350
Epoch 4/5
7403/7403 [==============================] - 162s 22ms/step - loss: -3.4589 - auc_6: 0.7959 - auc_value: 0.7831
Epoch 5/5
7403/7403 [==============================] - 162s 22ms/step - loss: -3.4888 - auc_6: 0.8209 - auc_value: 0.8140
1870/1870 [==============================] - 23s 11ms/step - loss: -3.3402 - auc_6: 0.7502 - auc_value: 0.7512

Test Loss -3.3401615619659424, Test Accuracy 0.7501789331436157, Test ROC AUC 0.7511806488037109,

Also, line 304 of DIEN.py raises an error:
test_loss, test_roc_auc = model.evaluate(test_dataset)
ValueError: too many values to unpack (expected 2)
I think it should be:
test_loss, test_roc_auc, test_accuracy = model.evaluate(test_dataset)
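For context (general tf.keras behaviour, not specific to this repo): model.evaluate returns the loss followed by one value per metric passed to model.compile, in compile order, so a model compiled with two metrics returns three values and two-target unpacking fails exactly as above. A dependency-free sketch of the pitfall, with hypothetical numbers:

```python
# hypothetical return value of model.evaluate for a model compiled with a
# loss and two metrics: [loss, metric1, metric2] in compile order
results = [-3.34, 0.750, 0.751]

try:
    test_loss, test_roc_auc = results  # three values, two targets -> ValueError
except ValueError as exc:
    error = str(exc)  # "too many values to unpack (expected 2)"

# one target per returned value fixes it; the names must follow compile order
test_loss, metric1, metric2 = results
```

Note that which metric lands in which position depends on the order given to model.compile, so the unpacking targets should be named to match that order.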

A question about how FM handles sequences

Teacher Wang Zhe, I'd like to follow Zhang Junlin's blog post 推荐系统召回四模型之:全能的FM模型 and build an FM-based embedding recall. If I feed the sequence of movies a user has watched into FM, should the movies all belong to the same field?

My understanding is that the movies don't need to interact with each other, so all the movies a user has watched, used as input features, should be tagged as a single field.

The other case is the movie genre feature, i.e. action, mystery, sci-fi and so on; it is also a sequence, but these can have feature interactions, so when fed into FM they should be tagged as different fields.

Is this understanding correct?

Error at runtime

java: package org.apache.flink.api.common.functions does not exist

How can I solve this?

ERR wrong number of arguments for 'hset' command

When running the feature engineering job with Spark, writing the results to Redis fails with the following error:

ERR wrong number of arguments for 'hset' command

The failing line is:
redisClient.hset(userKey, JavaConversions.mapAsJavaMap(valueMap))

The vector file is not written

The write in the following code in Embedding.scala has no effect. Why?

val bw = new BufferedWriter(new FileWriter(file))
for (movieId <- model.getVectors.keys) {
  bw.write(movieId + "::" + model.getVectors(movieId).mkString(" ") + "\n")
  println(model.getVectors(movieId).mkString(" "))
}
bw.close()

Error running FeatureEngineering

Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.2:exec (default-cli) on project SparrowRecSys: Command execution failed.

Error running DeepFM.py

python:3.8.5
tensorflow:2.2.0
tensorflow.python.framework.errors_impl.UnimplementedError: Cast string to int32 is not supported

The randomwalk function

Hello teacher, I'm a Geek Time student. In embedding.scala, the last line of the randomwalk function is Seq(sample.toList : _*). What does the : _* after sample.toList mean? What are the colon and underscore-asterisk for?
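Not an authoritative answer, but in Scala, xs: _* passes a sequence to a variadic parameter element by element (a "splat"), so Seq(sample.toList: _*) builds a Seq from the list's elements rather than a one-element Seq containing the list. Python's closest analogue is * unpacking, sketched here with hypothetical names:

```python
def seq(*elements):
    """A variadic function, standing in for Scala's Seq(...) constructor."""
    return list(elements)

sample = ["a", "b", "c"]

nested = seq(sample)   # the whole list arrives as a single argument
flat = seq(*sample)    # * spreads the elements, like sample.toList: _* in Scala
# nested == [["a", "b", "c"]], flat == ["a", "b", "c"]
```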

Error running FeatureEngForRecModel.py

21/02/23 10:54:55 ERROR TaskSetManager: Task 0 in stage 10.0 failed 1 times; aborting job
21/02/23 10:54:55 WARN TaskSetManager: Lost task 0.0 in stage 11.0 (TID 210, localhost, executor driver): TaskKilled (Stage cancelled)
Traceback (most recent call last):
File "D:/code/sparrowrecsys/SparrowRecSys-master/RecPySpark/src/com/sparrowrecsys/offline/pyspark/featureeng/FeatureEngForRecModel.py", line 151, in <module>
samplesWithMovieFeatures = addMovieFeatures(movieSamples, ratingSamplesWithLabel)
File "D:/code/sparrowrecsys/SparrowRecSys-master/RecPySpark/src/com/sparrowrecsys/offline/pyspark/featureeng/FeatureEngForRecModel.py", line 54, in addMovieFeatures
samplesWithMovies4.show(5, truncate=False)
File "D:\ProgramData\Anaconda3\envs\recoenv\lib\site-packages\pyspark\sql\dataframe.py", line 380, in show
print(self._jdf.showString(n, int(truncate), vertical))
File "D:\ProgramData\Anaconda3\envs\recoenv\lib\site-packages\py4j\java_gateway.py", line 1257, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "D:\ProgramData\Anaconda3\envs\recoenv\lib\site-packages\pyspark\sql\utils.py", line 63, in deco
return f(*a, **kw)
File "D:\ProgramData\Anaconda3\envs\recoenv\lib\site-packages\py4j\protocol.py", line 328, in get_return_value
format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o132.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 10.0 failed 1 times, most recent failure: Lost task 0.0 in stage 10.0 (TID 209, localhost, executor driver): org.apache.spark.SparkException: Python worker failed to connect back.

Suggestion: use Spring Boot for the API

I'd personally suggest using Spring Boot for the API; the code would be simpler and clearer, and more people are familiar with it. Of course Servlet is the foundation; I can help with this if needed.

Suggestion: upload a sample dataset so beginners (like me) can play with it

# Training samples path, change to your local path
training_samples_file_path = tf.keras.utils.get_file("trainingSamples.csv",
                                                     "file:///Users/zhewang/Workspace/SparrowRecSys/src/main"
                                                     "/resources/webroot/sampledata/trainingSamples.csv")
# Test samples path, change to your local path
test_samples_file_path = tf.keras.utils.get_file("testSamples.csv",
                                                 "file:///Users/zhewang/Workspace/SparrowRecSys/src/main"
                                                 "/resources/webroot/sampledata/testSamples.csv")

Problem deploying tensorflow/serving

[root@localhost temp]# docker run -t --rm -p 8501:8501 \
    -v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" \
    -e MODEL_NAME=half_plus_two \
    tensorflow/serving &
[1] 2053
[root@localhost temp]# /usr/bin/tf_serving_entrypoint.sh: line 3: 7 Illegal instruction (core dumped) tensorflow_model_server --port=8500 --rest_api_port=8501 --model_name=${MODEL_NAME} --model_base_path=${MODEL_BASE_PATH}/${MODEL_NAME} "$@"
[1]+ Exit 132 docker run -t --rm -p 8501:8501 -v "$TESTDATA/saved_model_half_plus_two_cpu:/models/half_plus_two" -e MODEL_NAME=half_plus_two tensorflow/serving
[root@localhost temp]#

Running the command above inside Docker (installed on CentOS 7) always produces this output and the model cannot be brought online. Is there a good way to fix this?

Error running FeatureEngForRecModel.py

@bfgf52 Hello, I'm running the code from this file in Jupyter. It fails at this step: samplesWithUserFeatures = addUserFeatures(samplesWithMovieFeatures). The error is below; could you help me see where the problem is? Thanks:
Py4JJavaError: An error occurred while calling o233.withColumn.
: org.apache.spark.sql.AnalysisException: cannot resolve 'reverse(userPositiveHistory)' due to data type mismatch: argument 1 requires string type, however, 'userPositiveHistory' is of array type.;;
'Project [movieId#27, userId#26, rating#28, timestamp#29, label#88, genres#12, releaseYear#122, movieGenre1#147, movieGenre2#156, movieGenre3#166, movieRatingCount#233L, movieAvgRating#190, movieRatingStddev#239, reverse(userPositiveHistory#354) AS userPositiveHistory#370]
+- Project [movieId#27, userId#26, rating#28, timestamp#29, label#88, genres#12, releaseYear#122, movieGenre1#147, movieGenre2#156, movieGenre3#166, movieRatingCount#233L, movieAvgRating#190, movieRatingStddev#239, userPositiveHistory#354]
+- Project [movieId#27, userId#26, rating#28, timestamp#29, label#88, genres#12, releaseYear#122, movieGenre1#147, movieGenre2#156, movieGenre3#166, movieRatingCount#233L, movieAvgRating#190, movieRatingStddev#239, _w0#355, userPositiveHistory#354, userPositiveHistory#354]
+- Window [collect_list(_w0#355, 0, 0) windowspecdefinition(userId#26, timestamp#29 ASC NULLS FIRST, specifiedwindowframe(RowFrame, -100, -1)) AS userPositiveHistory#354], [userId#26], [timestamp#29 ASC NULLS FIRST]
+- Project [movieId#27, userId#26, rating#28, timestamp#29, label#88, genres#12, releaseYear#122, movieGenre1#147, movieGenre2#156, movieGenre3#166, movieRatingCount#233L, movieAvgRating#190, movieRatingStddev#239, CASE WHEN (label#88 = 1) THEN movieId#27 ELSE cast(null as string) END AS _w0#355]
+- Project [movieId#27, userId#26, rating#28, timestamp#29, label#88, genres#12, releaseYear#122, movieGenre1#147, movieGenre2#156, movieGenre3#166, movieRatingCount#233L, movieAvgRating#190, movieRatingStddev#239]
+- Join LeftOuter, (movieId#27 = movieId#245)
:- Project [movieId#27, userId#26, rating#28, timestamp#29, label#88, genres#12, releaseYear#122, movieGenre1#147, movieGenre2#156, split(genres#12, |)[2] AS movieGenre3#166]
: +- Project [movieId#27, userId#26, rating#28, timestamp#29, label#88, genres#12, releaseYear#122, movieGenre1#147, split(genres#12, |)[1] AS movieGenre2#156]
: +- Project [movieId#27, userId#26, rating#28, timestamp#29, label#88, genres#12, releaseYear#122, split(genres#12, |)[0] AS movieGenre1#147]
: +- Project [movieId#27, userId#26, rating#28, timestamp#29, label#88, genres#12, releaseYear#122]
: +- Project [movieId#27, userId#26, rating#28, timestamp#29, label#88, (title#11) AS title#131, genres#12, releaseYear#122]
: +- Project [movieId#27, userId#26, rating#28, timestamp#29, label#88, title#11, genres#12, extractReleaseYearUdf(title#11) AS releaseYear#122]
: +- Project [movieId#27, userId#26, rating#28, timestamp#29, label#88, title#11, genres#12]
: +- Join LeftOuter, (movieId#27 = movieId#10)
: :- Project [userId#26, movieId#27, rating#28, timestamp#29, CASE WHEN (cast(rating#28 as double) >= 3.5) THEN 1 ELSE 0 END AS label#88]
: : +- Relation[userId#26,movieId#27,rating#28,timestamp#29] csv
: +- Relation[movieId#10,title#11,genres#12] csv
+- Project [movieId#245, movieRatingCount#233L, movieAvgRating#190, format_number(movieRatingStddev#234, 2) AS movieRatingStddev#239]
+- Project [movieId#245, coalesce(movieRatingCount#188L, cast(0.0 as bigint)) AS movieRatingCount#233L, movieAvgRating#190, coalesce(nanvl(movieRatingStddev#200, cast(null as double)), cast(0.0 as double)) AS movieRatingStddev#234]
+- Aggregate [movieId#245], [movieId#245, count(1) AS movieRatingCount#188L, format_number(avg(cast(rating#246 as double)), 2) AS movieAvgRating#190, stddev_samp(cast(rating#246 as double)) AS movieRatingStddev#200]
+- Project [movieId#245, userId#244, rating#246, timestamp#247, label#88, genres#12, releaseYear#122, movieGenre1#147, movieGenre2#156, split(genres#12, |)[2] AS movieGenre3#166]
+- Project [movieId#245, userId#244, rating#246, timestamp#247, label#88, genres#12, releaseYear#122, movieGenre1#147, split(genres#12, |)[1] AS movieGenre2#156]
+- Project [movieId#245, userId#244, rating#246, timestamp#247, label#88, genres#12, releaseYear#122, split(genres#12, |)[0] AS movieGenre1#147]
+- Project [movieId#245, userId#244, rating#246, timestamp#247, label#88, genres#12, releaseYear#122]
+- Project [movieId#245, userId#244, rating#246, timestamp#247, label#88, (title#11) AS title#131, genres#12, releaseYear#122]
+- Project [movieId#245, userId#244, rating#246, timestamp#247, label#88, title#11, genres#12, extractReleaseYearUdf(title#11) AS releaseYear#122]
+- Project [movieId#245, userId#244, rating#246, timestamp#247, label#88, title#11, genres#12]
+- Join LeftOuter, (movieId#245 = movieId#10)
:- Project [userId#244, movieId#245, rating#246, timestamp#247, CASE WHEN (cast(rating#246 as double) >= 3.5) THEN 1 ELSE 0 END AS label#88]
: +- Relation[userId#244,movieId#245,rating#246,timestamp#247] csv
+- Relation[movieId#10,title#11,genres#12] csv

Packages reported as not existing

C:\Users\xu\Downloads\SparrowRecSys-master\SparrowRecSys-master\src\main\java\com\sparrowrecsys\nearline\flink\RealTimeFeature.java:3:45
java: package org.apache.flink.api.common.functions does not exist
C:\Users\xu\Downloads\SparrowRecSys-master\SparrowRecSys-master\src\main\java\com\sparrowrecsys\nearline\flink\RealTimeFeature.java:4:36
java: package org.apache.flink.api.java.io does not exist
C:\Users\xu\Downloads\SparrowRecSys-master\SparrowRecSys-master\src\main\java\com\sparrowrecsys\nearline\flink\RealTimeFeature.java:5:49
java: package org.apache.flink.streaming.api.datastream does not exist
C:\Users\xu\Downloads\SparrowRecSys-master\SparrowRecSys-master\src\main\java\com\sparrowrecsys\nearline\flink\RealTimeFeature.java:6:50
java: package org.apache.flink.streaming.api.environment does not exist
C:\Users\xu\Downloads\SparrowRecSys-master\SparrowRecSys-master\src\main\java\com\sparrowrecsys\nearline\flink\RealTimeFeature.java:7:53
java: package org.apache.flink.streaming.api.functions.sink does not exist
C:\Users\xu\Downloads\SparrowRecSys-master\SparrowRecSys-master\src\main\java\com\sparrowrecsys\nearline\flink\RealTimeFeature.java:8:55
java: package org.apache.flink.streaming.api.functions.source does not exist
C:\Users\xu\Downloads\SparrowRecSys-master\SparrowRecSys-master\src\main\java\com\sparrowrecsys\nearline\flink\RealTimeFeature.java:9:53
java: package org.apache.flink.streaming.api.windowing.time does not exist
C:\Users\xu\Downloads\SparrowRecSys-master\SparrowRecSys-master\src\main\java\com\sparrowrecsys\nearline\flink\RealTimeFeature.java:34:15
java: cannot find symbol
  symbol:   class StreamExecutionEnvironment
  location: class com.sparrowrecsys.nearline.flink.RealTimeFeature
C:\Users\xu\Downloads\SparrowRecSys-master\SparrowRecSys-master\src\main\java\com\sparrowrecsys\nearline\flink\RealTimeFeature.java:34:48
java: cannot find symbol
  symbol:   variable StreamExecutionEnvironment
  location: class com.sparrowrecsys.nearline.flink.RealTimeFeature

How complete is the project?

Teacher, is this a complete project now? It feels like I don't know where to access many of the modules, such as recall and ranking.

How to set a user's recommendation model to "nerualcf"

After setting model = "nerualcf"; around line 136 of user.html, some users, e.g. user170, are still served by the "emb" model when generating recommendations. Why is that?
