English translation request:
Compared to the other two algorithms, the exponentiated algorithm has a rate θ associated with it. This rate needs to be set appropriately. In practice, we observed that the performance of the exponentiated algorithm is sensitive to the value of the rate. In particular, we multiplied the rate θ by a numerical value and studied how the algorithm behaved. Note that this effectively changes the radius of the data, but it seemed to significantly affect the behavior of the exponentiated algorithm. The results of this experiment are shown in Figure 4. The performance of the algorithm first improves and then deteriorates as the rate factor increases.
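The excerpt does not spell out the exponentiated update itself, so purely as a hedged illustration of what "multiplying the rate θ by a numerical value" could look like, here is a minimal sketch assuming a generic exponentiated-gradient-style (multiplicative) update; the function name, the renormalisation step, and the specific factors below are illustrative assumptions, not the paper's actual algorithm.

    import numpy as np

    def exponentiated_update(weights, gradient, theta):
        # One multiplicative step with rate theta (a generic EG-style update,
        # assumed here; the paper's exact rule may differ).
        new_w = weights * np.exp(-theta * gradient)
        return new_w / new_w.sum()  # renormalise so the weights still sum to 1

    rng = np.random.default_rng(0)
    w = np.full(5, 1.0 / 5)              # uniform starting weights
    grad = rng.normal(size=5)            # stand-in gradient from feedback

    base_theta = 0.1                     # hypothetical base rate
    for factor in (0.1, 1.0, 10.0):      # sweep of the kind reported in Figure 4
        print(factor, exponentiated_update(w, grad, base_theta * factor))

Sweeping the factor this way makes it easy to see how strongly a multiplicative update reacts to its rate, which is the kind of sensitivity the passage reports.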
We proposed three algorithms to learn diversity from implicit feedback. In this section, we study whether there is a difference in performance between these three algorithms. The clipped DP (Algorithm 3) was proposed mainly due to theoretical considerations. To compare the three algorithms, we followed the same setup as in Section 6.2. For the exponentiated algorithm, we considered the best rate parameter from the previous experiment. The results for this experiment are shown in Figure 5. It can be seen that there is not much of a difference between the clipped and the non-clipped algorithms in the case of RCV-1. In the case of 20NG, there is hardly any difference between the three algorithms. Even though restricting weights to positive values is required for theoretical purposes, in practice it does not seem to make much of a difference on these two datasets.
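The "clipped" variant is only named here, not defined, so as a rough sketch of one common reading of "restricting weights to positive values", the snippet below clips an additive update at zero; the function names and numbers are hypothetical, not Algorithm 3 from the paper.

    import numpy as np

    def additive_update(weights, gradient, eta):
        # Plain additive step; weights are free to go negative.
        return weights - eta * gradient

    def clipped_update(weights, gradient, eta):
        # Same step, but weights are clipped at zero afterwards.
        return np.maximum(additive_update(weights, gradient, eta), 0.0)

    w = np.array([0.2, 0.5, 0.3])
    grad = np.array([1.5, -0.4, 0.1])
    print(additive_update(w, grad, eta=0.2))   # first weight goes negative
    print(clipped_update(w, grad, eta=0.2))    # clipped version keeps it at 0.0

On toy numbers like these, the two updates differ only when a weight would cross zero, which is loosely consistent with the passage's observation that clipping makes little practical difference.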
The gist is roughly as follows (some technical terms may be rendered somewhat loosely):
Compared with the other two algorithms, the exponentiated algorithm has a rate θ associated with it, and this rate needs to be set appropriately. In practice, we observed that the performance of the exponentiated algorithm is sensitive to the value of this rate. In particular, we multiplied θ by a numerical factor and studied how the algorithm behaved. Note that this effectively changes the radius of the data, yet it seemed to noticeably affect the behavior of the exponentiated algorithm. The results of this experiment are shown in Figure 4: the algorithm's performance first improves and then deteriorates as the rate factor increases.

We proposed three algorithms for learning diversity from implicit feedback. In this section, we study whether there is any difference in performance among the three. The clipped DP (Algorithm 3) was proposed mainly for theoretical reasons. To compare the three algorithms, we followed the same setup as in Section 6.2, and for the exponentiated algorithm we used the best rate parameter found in the previous experiment. The results of this experiment are shown in Figure 5. It can be seen that on RCV-1 there is not much difference between the clipped and non-clipped algorithms, and on 20NG there is hardly any difference among the three algorithms. Even though restricting the weights to positive values is needed for the theoretical analysis, in practice it does not seem to make much of a difference on these two datasets.
Compared to the other two algorithms, the exponentiated algorithm has a rate θ associated with it, which needs to be set appropriately. In practice, we observed that the performance of the exponentiated algorithm is sensitive to the value of the rate. In particular, we multiplied the rate θ by a numerical value and studied how the algorithm behaved. Note that this effectively changes the radius of the data, but it seemed to significantly affect the behavior of the exponentiated algorithm. The results of this experiment are shown in Figure 4: the performance of the algorithm first improves and then deteriorates as the rate factor increases. We proposed three algorithms to learn diversity from implicit feedback. In this section, we study whether the performance of these three algorithms differs. The clipped DP (Algorithm 3) was proposed mainly out of theoretical considerations. To compare the three algorithms, we followed the same setup as in Section 6.2; for the exponentiated algorithm, we used the best rate parameter from the previous experiment. The results of this experiment are shown in Figure 5. There is not much difference between the clipped and non-clipped algorithms on RCV-1, and on 20NG there is hardly any difference among the three algorithms. Although restricting the weights to positive values is required for theoretical purposes, in practice it does not make much of a difference on these two datasets.